Test Report: Docker_Linux_crio_arm64 21794

1ae3cc206fa1c5283cece957f99367f4350f676e:2025-10-25:42054

Failed tests (37/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.77
35 TestAddons/parallel/Registry 15.63
36 TestAddons/parallel/RegistryCreds 0.54
37 TestAddons/parallel/Ingress 143.69
38 TestAddons/parallel/InspektorGadget 5.29
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 58.98
42 TestAddons/parallel/Headlamp 3.86
43 TestAddons/parallel/CloudSpanner 5.34
44 TestAddons/parallel/LocalPath 8.52
45 TestAddons/parallel/NvidiaDevicePlugin 6.33
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.54
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.95
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.5
135 TestFunctional/parallel/ServiceCmd/Format 0.48
136 TestFunctional/parallel/ServiceCmd/URL 0.57
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.49
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
190 TestJSONOutput/pause/Command 1.82
196 TestJSONOutput/unpause/Command 2.05
249 TestScheduledStopUnix 40.02
280 TestPause/serial/Pause 7.31
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.44
302 TestStartStop/group/old-k8s-version/serial/Pause 6.51
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.49
313 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.29
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.63
326 TestStartStop/group/embed-certs/serial/Pause 7.62
330 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.69
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.48
342 TestStartStop/group/newest-cni/serial/Pause 9
347 TestStartStop/group/no-preload/serial/Pause 6.44
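
Several of the TestAddons failures below share one signature: "minikube addons disable" exits with MK_ADDON_DISABLE_PAUSED because its paused-state check shells out to "sudo runc list -f json", which fails inside the node with "open /run/runc: no such file or directory". A quick, hypothetical way to gauge how widespread the signature is, assuming this report's raw text has been saved locally as report.txt (an assumed filename):

    # Count the shared failure signature; each failing command prints the line
    # twice (once with the klog prefix, once plain), so the count is doubled.
    grep -c 'Exiting due to MK_ADDON_DISABLE_PAUSED' report.txt

    # Show the underlying runc error once, with a little context.
    grep -m1 -A4 'sudo runc list -f json: Process exited with status 1' report.txt
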
TestAddons/serial/Volcano (0.77s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable volcano --alsologtostderr -v=1: exit status 11 (764.810715ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:27.517615  300766 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:27.519202  300766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:27.519217  300766 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:27.519223  300766 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:27.519537  300766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:27.519848  300766 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:27.520236  300766 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:27.520254  300766 addons.go:606] checking whether the cluster is paused
	I1025 09:35:27.520357  300766 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:27.520372  300766 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:27.520814  300766 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:27.561290  300766 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:27.561356  300766 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:27.578332  300766 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:27.682135  300766 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:27.682230  300766 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:27.718335  300766 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:27.718359  300766 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:27.718375  300766 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:27.718379  300766 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:27.718383  300766 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:27.718386  300766 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:27.718390  300766 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:27.718394  300766 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:27.718397  300766 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:27.718403  300766 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:27.718407  300766 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:27.718410  300766 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:27.718414  300766 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:27.718417  300766 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:27.718420  300766 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:27.718425  300766 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:27.718433  300766 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:27.718436  300766 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:27.718439  300766 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:27.718442  300766 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:27.718446  300766 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:27.718449  300766 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:27.718452  300766 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:27.718455  300766 cri.go:89] found id: ""
	I1025 09:35:27.718507  300766 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:27.733785  300766 out.go:203] 
	W1025 09:35:27.736898  300766 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:27.736926  300766 out.go:285] * 
	* 
	W1025 09:35:28.189868  300766 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:28.192896  300766 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.77s)
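
The exit 11 above comes from the paused-state check, not from the volcano addon itself: "sudo runc list -f json" fails because /run/runc does not exist in the node, and the identical error repeats in the Registry and RegistryCreds sections below. A minimal reproduction sketch, assuming the addons-523976 profile is still running (the crictl step is an illustrative addition, not part of the test):

    # Re-run the exact command the disable path runs (see the ssh_runner line above).
    out/minikube-linux-arm64 -p addons-523976 ssh -- sudo runc list -f json
    # Expected, per the log: level=error msg="open /run/runc: no such file or directory"

    # Inspect the CRI runtime status; if crio is wired to crun or to a
    # non-default runc root, /run/runc may never be created.
    out/minikube-linux-arm64 -p addons-523976 ssh -- sudo crictl info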

TestAddons/parallel/Registry (15.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.166328ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003344708s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004164313s
addons_test.go:392: (dbg) Run:  kubectl --context addons-523976 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-523976 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-523976 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.782624569s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable registry --alsologtostderr -v=1: exit status 11 (267.075387ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:53.863620  301285 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:53.864399  301285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:53.864419  301285 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:53.864426  301285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:53.864690  301285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:53.864981  301285 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:53.865352  301285 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:53.865370  301285 addons.go:606] checking whether the cluster is paused
	I1025 09:35:53.865475  301285 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:53.865491  301285 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:53.865947  301285 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:53.883129  301285 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:53.883233  301285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:53.901521  301285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:54.009650  301285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:54.009751  301285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:54.044039  301285 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:54.044062  301285 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:54.044068  301285 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:54.044072  301285 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:54.044075  301285 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:54.044079  301285 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:54.044082  301285 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:54.044084  301285 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:54.044087  301285 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:54.044093  301285 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:54.044097  301285 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:54.044104  301285 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:54.044111  301285 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:54.044115  301285 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:54.044118  301285 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:54.044123  301285 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:54.044129  301285 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:54.044132  301285 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:54.044135  301285 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:54.044138  301285 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:54.044143  301285 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:54.044146  301285 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:54.044150  301285 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:54.044157  301285 cri.go:89] found id: ""
	I1025 09:35:54.044217  301285 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:54.059957  301285 out.go:203] 
	W1025 09:35:54.063613  301285 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:54.063639  301285 out.go:285] * 
	* 
	W1025 09:35:54.069943  301285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:54.073703  301285 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.63s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.787811ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-523976
addons_test.go:332: (dbg) Run:  kubectl --context addons-523976 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (260.406549ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:36:58.875144  303428 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:58.876658  303428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.876708  303428 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:58.876730  303428 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.877029  303428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:58.877357  303428 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:58.877784  303428 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.877829  303428 addons.go:606] checking whether the cluster is paused
	I1025 09:36:58.877958  303428 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.877992  303428 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:58.878457  303428 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:58.897092  303428 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:58.897224  303428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:58.915414  303428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:59.018285  303428 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:59.018362  303428 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:59.049871  303428 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:59.049894  303428 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:59.049898  303428 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:59.049902  303428 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:59.049906  303428 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:59.049910  303428 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:59.049913  303428 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:59.049917  303428 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:59.049920  303428 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:59.049927  303428 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:59.049930  303428 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:59.049934  303428 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:59.049938  303428 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:59.049941  303428 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:59.049945  303428 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:59.049951  303428 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:59.049957  303428 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:59.049962  303428 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:59.049965  303428 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:59.049968  303428 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:59.049973  303428 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:59.049977  303428 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:59.049980  303428 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:59.049983  303428 cri.go:89] found id: ""
	I1025 09:36:59.050035  303428 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:59.064716  303428 out.go:203] 
	W1025 09:36:59.067720  303428 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:59.067763  303428 out.go:285] * 
	* 
	W1025 09:36:59.074062  303428 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:59.077572  303428 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (143.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-523976 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-523976 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-523976 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0da13c2c-06ca-4eb0-9d62-a38ca1b91e67] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0da13c2c-06ca-4eb0-9d62-a38ca1b91e67] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002921843s
I1025 09:36:23.555461  294017 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.969310677s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-523976 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
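
For the curl step above, "ssh: Process exited with status 28" is curl's exit code 28 (operation timed out), so the request hung rather than being refused (a refused connection would exit 7). A hand-check sketch, assuming the profile is still up; the -m timeout and the controller-pod query are illustrative additions, not part of the test:

    # Repeat the in-node request with a short timeout and an HTTP status readout.
    out/minikube-linux-arm64 -p addons-523976 ssh \
      "curl -s -m 10 -o /dev/null -w '%{http_code}\n' http://127.0.0.1/ -H 'Host: nginx.example.com'"

    # Check that the ingress-nginx controller pod is Ready before suspecting the backend.
    kubectl --context addons-523976 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
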
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-523976
helpers_test.go:243: (dbg) docker inspect addons-523976:

-- stdout --
	[
	    {
	        "Id": "9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1",
	        "Created": "2025-10-25T09:32:59.9140353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:59.987197113Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/hosts",
	        "LogPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1-json.log",
	        "Name": "/addons-523976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-523976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-523976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1",
	                "LowerDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-523976",
	                "Source": "/var/lib/docker/volumes/addons-523976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-523976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-523976",
	                "name.minikube.sigs.k8s.io": "addons-523976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f27cd7116b2f7b226bc58fd2974beb86d7d23d60a1c9828b992a93e933600536",
	            "SandboxKey": "/var/run/docker/netns/f27cd7116b2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-523976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d6:a5:d7:54:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bc5fb69b22cf2a4fbdd9de449d489f38af903ee1ee0d6eb29d9ffd0fa06e1ba",
	                    "EndpointID": "6f07d95b66d1649dba8c31fa7de1fb050d190fbd03a382942539f4b12c117ce5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-523976",
	                        "9fc15dbb1b0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-523976 -n addons-523976
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-523976 logs -n 25: (1.529322246s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-545529                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-545529 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ --download-only -p binary-mirror-490963 --alsologtostderr --binary-mirror http://127.0.0.1:46207 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-490963   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-490963                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-490963   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-523976                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-523976                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p addons-523976 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ip      │ addons-523976 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ssh     │ addons-523976 ssh cat /opt/local-path-provisioner/pvc-e0338399-28dc-478f-89a3-735d9bdcfa58_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ enable headlamp -p addons-523976 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ addons-523976 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ addons-523976 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ ssh     │ addons-523976 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ addons-523976 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ addons-523976 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-523976                                                                                                                                                                                                                                                                                                                                                                                           │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │ 25 Oct 25 09:36 UTC │
	│ addons  │ addons-523976 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:36 UTC │                     │
	│ ip      │ addons-523976 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:38 UTC │ 25 Oct 25 09:38 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:33.407760  294773 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:33.407878  294773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:33.407914  294773 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:33.407925  294773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:33.408191  294773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:32:33.408634  294773 out.go:368] Setting JSON to false
	I1025 09:32:33.409430  294773 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4503,"bootTime":1761380250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:32:33.409500  294773 start.go:141] virtualization:  
	I1025 09:32:33.412886  294773 out.go:179] * [addons-523976] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:33.416589  294773 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:32:33.416635  294773 notify.go:220] Checking for updates...
	I1025 09:32:33.419618  294773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:33.422520  294773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:32:33.425393  294773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:32:33.428609  294773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:32:33.431446  294773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:33.434475  294773 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:33.468035  294773 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:33.468164  294773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:33.534040  294773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:32:33.5249696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:33.534143  294773 docker.go:318] overlay module found
	I1025 09:32:33.537170  294773 out.go:179] * Using the docker driver based on user configuration
	I1025 09:32:33.540059  294773 start.go:305] selected driver: docker
	I1025 09:32:33.540094  294773 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:33.540108  294773 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:33.540852  294773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:33.598380  294773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:32:33.588800484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:33.598590  294773 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:33.598953  294773 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:32:33.602022  294773 out.go:179] * Using Docker driver with root privileges
	I1025 09:32:33.604991  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:32:33.605079  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:33.605095  294773 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:33.605179  294773 start.go:349] cluster config:
	{Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:33.610210  294773 out.go:179] * Starting "addons-523976" primary control-plane node in "addons-523976" cluster
	I1025 09:32:33.613075  294773 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:33.616147  294773 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:33.618954  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:33.619042  294773 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:33.619278  294773 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:33.619294  294773 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:33.619384  294773 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:32:33.619394  294773 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:33.619732  294773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json ...
	I1025 09:32:33.619752  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json: {Name:mkec784ce2da4db8900e08806a3e0bbaa1dadf28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:33.635948  294773 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:33.636108  294773 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:33.636133  294773 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:32:33.636142  294773 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:32:33.636151  294773 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:32:33.636156  294773 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:32:51.684840  294773 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:32:51.684878  294773 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:51.684908  294773 start.go:360] acquireMachinesLock for addons-523976: {Name:mk120d50a90dba65a5a199c912429594e3c4a035 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:51.685044  294773 start.go:364] duration metric: took 117.918µs to acquireMachinesLock for "addons-523976"
	I1025 09:32:51.685070  294773 start.go:93] Provisioning new machine with config: &{Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:32:51.685141  294773 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:32:51.688519  294773 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:32:51.688762  294773 start.go:159] libmachine.API.Create for "addons-523976" (driver="docker")
	I1025 09:32:51.688813  294773 client.go:168] LocalClient.Create starting
	I1025 09:32:51.688951  294773 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 09:32:52.573194  294773 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 09:32:53.171019  294773 cli_runner.go:164] Run: docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:32:53.186438  294773 cli_runner.go:211] docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:32:53.186539  294773 network_create.go:284] running [docker network inspect addons-523976] to gather additional debugging logs...
	I1025 09:32:53.186560  294773 cli_runner.go:164] Run: docker network inspect addons-523976
	W1025 09:32:53.202409  294773 cli_runner.go:211] docker network inspect addons-523976 returned with exit code 1
	I1025 09:32:53.202438  294773 network_create.go:287] error running [docker network inspect addons-523976]: docker network inspect addons-523976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-523976 not found
	I1025 09:32:53.202450  294773 network_create.go:289] output of [docker network inspect addons-523976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-523976 not found
	
	** /stderr **
	I1025 09:32:53.202540  294773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:53.218441  294773 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d12460}
	I1025 09:32:53.218494  294773 network_create.go:124] attempt to create docker network addons-523976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:32:53.218548  294773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-523976 addons-523976
	I1025 09:32:53.276339  294773 network_create.go:108] docker network addons-523976 192.168.49.0/24 created
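The network_create step above pins the cluster to a fixed subnet so the node container can later be assigned the static IP 192.168.49.2. A minimal sketch of the same pattern, runnable outside minikube (the demo-net name is illustrative; the flags mirror the command logged above):

    # Create a bridge network with an explicit subnet, gateway and MTU,
    # as minikube does for the cluster network.
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o com.docker.network.driver.mtu=1500 demo-net

    # Check what was actually assigned, then clean up.
    docker network inspect demo-net --format '{{json .IPAM.Config}}'
    docker network rm demo-net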
	I1025 09:32:53.276375  294773 kic.go:121] calculated static IP "192.168.49.2" for the "addons-523976" container
	I1025 09:32:53.276448  294773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:32:53.291670  294773 cli_runner.go:164] Run: docker volume create addons-523976 --label name.minikube.sigs.k8s.io=addons-523976 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:32:53.313797  294773 oci.go:103] Successfully created a docker volume addons-523976
	I1025 09:32:53.313894  294773 cli_runner.go:164] Run: docker run --rm --name addons-523976-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --entrypoint /usr/bin/test -v addons-523976:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:32:55.423346  294773 cli_runner.go:217] Completed: docker run --rm --name addons-523976-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --entrypoint /usr/bin/test -v addons-523976:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.109397199s)
	I1025 09:32:55.423382  294773 oci.go:107] Successfully prepared a docker volume addons-523976
	I1025 09:32:55.423409  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:55.423428  294773 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:32:55.423493  294773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-523976:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:32:59.844161  294773 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-523976:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.420614087s)
	I1025 09:32:59.844198  294773 kic.go:203] duration metric: took 4.420764111s to extract preloaded images to volume ...
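The preload extraction above uses a standard Docker idiom: populate a named volume by mounting it into a throwaway container whose entrypoint unpacks a tarball. A hedged sketch of the same idiom (the volume name and tarball path are placeholders; the tar flags match the logged command):

    # Fill a named volume from an lz4-compressed tarball using a
    # disposable container; --rm discards the container afterwards.
    docker volume create demo-data
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-data:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
      -I lz4 -xf /preloaded.tar -C /extractDir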
	W1025 09:32:59.844327  294773 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:32:59.844444  294773 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:32:59.896655  294773 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-523976 --name addons-523976 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-523976 --network addons-523976 --ip 192.168.49.2 --volume addons-523976:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:33:00.427099  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Running}}
	I1025 09:33:00.455362  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.483255  294773 cli_runner.go:164] Run: docker exec addons-523976 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:33:00.534519  294773 oci.go:144] the created container "addons-523976" has a running status.
	I1025 09:33:00.534550  294773 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa...
	I1025 09:33:00.778871  294773 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:33:00.804947  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.831583  294773 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:33:00.831660  294773 kic_runner.go:114] Args: [docker exec --privileged addons-523976 chown docker:docker /home/docker/.ssh/authorized_keys]
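SSH provisioning here is plain OpenSSH plus docker exec: mint a key pair on the host, append the public half to the container user's authorized_keys, and fix ownership, as the kic.go and kic_runner lines record. A sketch under those assumptions (the container name demo-node and the key path are illustrative):

    # Generate a dedicated, passphrase-less key pair for the node.
    ssh-keygen -t rsa -N '' -f ./node_id_rsa

    # Install the public key for the in-container docker user.
    docker exec -i demo-node sh -c \
      'mkdir -p /home/docker/.ssh && cat >> /home/docker/.ssh/authorized_keys' \
      < ./node_id_rsa.pub
    docker exec --privileged demo-node \
      chown docker:docker /home/docker/.ssh/authorized_keys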
	I1025 09:33:00.900382  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.932799  294773 machine.go:93] provisionDockerMachine start ...
	I1025 09:33:00.932893  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:00.954722  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:00.955056  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:00.955067  294773 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:33:00.955664  294773 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50230->127.0.0.1:33142: read: connection reset by peer
	I1025 09:33:04.107581  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523976
	
	I1025 09:33:04.107669  294773 ubuntu.go:182] provisioning hostname "addons-523976"
	I1025 09:33:04.107762  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:04.125551  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:04.125853  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:04.125867  294773 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-523976 && echo "addons-523976" | sudo tee /etc/hostname
	I1025 09:33:04.284352  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523976
	
	I1025 09:33:04.284478  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:04.301352  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:04.301678  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:04.301696  294773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-523976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-523976/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-523976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:33:04.451323  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:33:04.451351  294773 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 09:33:04.451380  294773 ubuntu.go:190] setting up certificates
	I1025 09:33:04.451391  294773 provision.go:84] configureAuth start
	I1025 09:33:04.451452  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:04.467962  294773 provision.go:143] copyHostCerts
	I1025 09:33:04.468043  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 09:33:04.468166  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 09:33:04.468269  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 09:33:04.468322  294773 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.addons-523976 san=[127.0.0.1 192.168.49.2 addons-523976 localhost minikube]
	I1025 09:33:05.341230  294773 provision.go:177] copyRemoteCerts
	I1025 09:33:05.341302  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:33:05.341344  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.358121  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:05.462894  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:33:05.480320  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:33:05.498197  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:33:05.515858  294773 provision.go:87] duration metric: took 1.0644412s to configureAuth
	I1025 09:33:05.515887  294773 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:33:05.516078  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:05.516191  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.532954  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:05.533258  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:05.533279  294773 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:33:05.785106  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:33:05.785131  294773 machine.go:96] duration metric: took 4.852312132s to provisionDockerMachine
	I1025 09:33:05.785143  294773 client.go:171] duration metric: took 14.096316457s to LocalClient.Create
	I1025 09:33:05.785156  294773 start.go:167] duration metric: took 14.096396015s to libmachine.API.Create "addons-523976"
	I1025 09:33:05.785163  294773 start.go:293] postStartSetup for "addons-523976" (driver="docker")
	I1025 09:33:05.785174  294773 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:33:05.785237  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:33:05.785279  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.802590  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:05.907136  294773 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:33:05.910420  294773 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:33:05.910447  294773 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:33:05.910459  294773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 09:33:05.910527  294773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 09:33:05.910553  294773 start.go:296] duration metric: took 125.384621ms for postStartSetup
	I1025 09:33:05.910865  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:05.927404  294773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json ...
	I1025 09:33:05.927698  294773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:33:05.927748  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.944087  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.044161  294773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:33:06.048779  294773 start.go:128] duration metric: took 14.363623028s to createHost
	I1025 09:33:06.048802  294773 start.go:83] releasing machines lock for "addons-523976", held for 14.363749085s
	I1025 09:33:06.048876  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:06.065552  294773 ssh_runner.go:195] Run: cat /version.json
	I1025 09:33:06.065607  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:06.065864  294773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:33:06.065928  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:06.083908  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.085086  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.186790  294773 ssh_runner.go:195] Run: systemctl --version
	I1025 09:33:06.278512  294773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:33:06.313895  294773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:33:06.318084  294773 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:33:06.318158  294773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:33:06.348132  294773 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:33:06.348210  294773 start.go:495] detecting cgroup driver to use...
	I1025 09:33:06.348262  294773 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:33:06.348379  294773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:33:06.367411  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:33:06.380270  294773 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:33:06.380337  294773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:33:06.400378  294773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:33:06.419143  294773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:33:06.563069  294773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:33:06.689621  294773 docker.go:234] disabling docker service ...
	I1025 09:33:06.689689  294773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:33:06.710782  294773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:33:06.723655  294773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:33:06.842525  294773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:33:06.963220  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 09:33:06.977326  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:33:06.991029  294773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:33:06.991099  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.000726  294773 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:33:07.000872  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.011231  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.022655  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.032253  294773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:33:07.040240  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.049524  294773 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.063350  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.072272  294773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:33:07.079456  294773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:33:07.086701  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:07.195104  294773 ssh_runner.go:195] Run: sudo systemctl restart crio
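The run of sed invocations above (09:33:06.991 through 09:33:07.063) rewrites CRI-O's drop-in config in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. Condensed into one runnable block (file path and values taken from the log; run on the node, then reload and restart as the log does):

    CONF=/etc/crio/crio.conf.d/02-crio.conf

    # Pin the pause image and switch the cgroup manager to cgroupfs.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"

    # Re-create conmon_cgroup directly after cgroup_manager.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"

    # Apply the edits.
    sudo systemctl daemon-reload && sudo systemctl restart crio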
	I1025 09:33:07.318679  294773 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:33:07.318806  294773 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:33:07.322649  294773 start.go:563] Will wait 60s for crictl version
	I1025 09:33:07.322761  294773 ssh_runner.go:195] Run: which crictl
	I1025 09:33:07.326322  294773 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:33:07.362113  294773 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
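The version probe above is the standard liveness check for a CRI endpoint; the same check works against any CRI socket and does not depend on minikube (socket path taken from this log):

    # Query runtime identity and full status over the CRI socket.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info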
	I1025 09:33:07.362235  294773 ssh_runner.go:195] Run: crio --version
	I1025 09:33:07.390496  294773 ssh_runner.go:195] Run: crio --version
	I1025 09:33:07.424573  294773 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:33:07.427512  294773 cli_runner.go:164] Run: docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:33:07.442659  294773 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:33:07.446413  294773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:33:07.455846  294773 kubeadm.go:883] updating cluster {Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:33:07.455970  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:33:07.456031  294773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:33:07.492027  294773 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:33:07.492050  294773 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:33:07.492106  294773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:33:07.518291  294773 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:33:07.518316  294773 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:33:07.518325  294773 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:33:07.518430  294773 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-523976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
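The kubelet flags above are rendered into a systemd drop-in (scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Two stock systemd commands show what will actually run once the daemon is reloaded (a verification sketch, not part of this log):

    # Print the unit plus all drop-ins exactly as systemd merged them.
    systemctl cat kubelet

    # Confirm the effective ExecStart after daemon-reload.
    systemctl show kubelet -p ExecStart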
	I1025 09:33:07.518538  294773 ssh_runner.go:195] Run: crio config
	I1025 09:33:07.591047  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:33:07.591072  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:07.591092  294773 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:33:07.591115  294773 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-523976 NodeName:addons-523976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:33:07.591251  294773 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-523976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
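A manifest like the multi-document one above can be checked offline before kubeadm consumes it. Assuming a reasonably recent kubeadm (the validate subcommand may not exist in older releases) and the path the log writes it to below, a sketch:

    # Validate the kubeadm/kubelet/kube-proxy documents without applying them.
    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new

    # Print the defaults kubeadm would otherwise use, for comparison.
    kubeadm config print init-defaults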
	
	I1025 09:33:07.591328  294773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:33:07.598666  294773 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:33:07.598752  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:33:07.605832  294773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:33:07.617698  294773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:33:07.630366  294773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1025 09:33:07.642602  294773 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:33:07.646028  294773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:33:07.655321  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:07.775319  294773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:07.795555  294773 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976 for IP: 192.168.49.2
	I1025 09:33:07.795622  294773 certs.go:195] generating shared ca certs ...
	I1025 09:33:07.795652  294773 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:07.796480  294773 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 09:33:08.161277  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt ...
	I1025 09:33:08.161310  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt: {Name:mk790b2054fd2159ff24102bbc4a2b5c8a42b58f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.161548  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key ...
	I1025 09:33:08.161564  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key: {Name:mkff04b43f00f5d3a44d154a58f9755924430f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.161665  294773 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 09:33:08.439687  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt ...
	I1025 09:33:08.439721  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt: {Name:mk99e8d68bb4e95b72f461ca6eaf7608c70c4c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.439950  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key ...
	I1025 09:33:08.439965  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key: {Name:mk49783463ce9c968396ea7320bf74f172bc8b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.440642  294773 certs.go:257] generating profile certs ...
	I1025 09:33:08.440706  294773 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key
	I1025 09:33:08.440724  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt with IP's: []
	I1025 09:33:08.840504  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt ...
	I1025 09:33:08.840540  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: {Name:mka12a7197be577f0d247ec5e33034f94ec73765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.840741  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key ...
	I1025 09:33:08.840761  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key: {Name:mka6fbf7422568b37cdf3ecd55d9d8bfbec3244b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.840876  294773 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55
	I1025 09:33:08.840897  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:33:09.097293  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 ...
	I1025 09:33:09.097326  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55: {Name:mk9a82f852e7d9dff3c571e77d2147925f4263e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.098114  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55 ...
	I1025 09:33:09.098132  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55: {Name:mk0015c9c86117cd916ff5bbcaf915901a07d7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.098754  294773 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt
	I1025 09:33:09.098854  294773 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key
	I1025 09:33:09.098908  294773 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key
	I1025 09:33:09.098931  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt with IP's: []
	I1025 09:33:09.378501  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt ...
	I1025 09:33:09.378533  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt: {Name:mkf2a022fd313ab9805f4455106606739edd2a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.379348  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key ...
	I1025 09:33:09.379372  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key: {Name:mkda3efe9771584f87be7ec433282a34292efc86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
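	The apiserver profile cert generated above is signed for the IPs listed at 09:33:08.840897. A hedged way to confirm the SANs on disk (cert path copied verbatim from this log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expected IPs: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2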
	I1025 09:33:09.380157  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:33:09.380234  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:33:09.380278  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:33:09.380305  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 09:33:09.380900  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:33:09.398481  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:33:09.417274  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:33:09.436078  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:33:09.453766  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:33:09.470953  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:33:09.488130  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:33:09.505544  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:33:09.522860  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:33:09.540519  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:33:09.553392  294773 ssh_runner.go:195] Run: openssl version
	I1025 09:33:09.560754  294773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:33:09.569438  294773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.573257  294773 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.573324  294773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.614754  294773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
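	The two steps above implement OpenSSL's hash-based CA lookup: `openssl x509 -hash` prints the subject-name hash (b5213941 here), and the `<hash>.0` symlink in /etc/ssl/certs is what lets TLS clients on the node trust the minikube CA. Reproduced as a standalone sketch:

	# recompute the subject hash and recreate the symlink minikube made above
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # b5213941.0 in this run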
	I1025 09:33:09.623192  294773 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:33:09.626674  294773 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:33:09.626766  294773 kubeadm.go:400] StartCluster: {Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:33:09.626843  294773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:33:09.626901  294773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:33:09.653582  294773 cri.go:89] found id: ""
	I1025 09:33:09.653663  294773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:33:09.661255  294773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:33:09.668722  294773 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:33:09.668832  294773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:33:09.676595  294773 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:33:09.676616  294773 kubeadm.go:157] found existing configuration files:
	
	I1025 09:33:09.676670  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:33:09.684094  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:33:09.684179  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:33:09.691410  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:33:09.699015  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:33:09.699081  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:33:09.706003  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:33:09.714205  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:33:09.714296  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:33:09.722043  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:33:09.729836  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:33:09.729943  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
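	The four grep/rm pairs above are one cleanup loop: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed (here none existed yet, so each rm was a no-op). A minimal sketch of the same logic:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done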
	I1025 09:33:09.737074  294773 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:33:09.805217  294773 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:33:09.805466  294773 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:33:09.875725  294773 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:33:26.930017  294773 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:33:26.930081  294773 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:33:26.930175  294773 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:33:26.930237  294773 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:33:26.930279  294773 kubeadm.go:318] OS: Linux
	I1025 09:33:26.930330  294773 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:33:26.930384  294773 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:33:26.930437  294773 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:33:26.930490  294773 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:33:26.930544  294773 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:33:26.930597  294773 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:33:26.930647  294773 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:33:26.930700  294773 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:33:26.930751  294773 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:33:26.930829  294773 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:33:26.930930  294773 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:33:26.931026  294773 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:33:26.931093  294773 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:33:26.934032  294773 out.go:252]   - Generating certificates and keys ...
	I1025 09:33:26.934126  294773 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:33:26.934193  294773 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:33:26.934259  294773 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:33:26.934316  294773 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:33:26.934376  294773 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:33:26.934426  294773 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:33:26.934480  294773 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:33:26.934596  294773 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-523976 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:33:26.934648  294773 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:33:26.934763  294773 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-523976 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:33:26.934843  294773 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:33:26.934907  294773 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:33:26.934951  294773 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:33:26.935006  294773 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:33:26.935057  294773 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:33:26.935114  294773 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:33:26.935241  294773 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:33:26.935306  294773 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:33:26.935361  294773 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:33:26.935450  294773 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:33:26.935520  294773 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:33:26.940407  294773 out.go:252]   - Booting up control plane ...
	I1025 09:33:26.940582  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:33:26.940745  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:33:26.940832  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:33:26.940970  294773 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:33:26.941089  294773 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:33:26.941202  294773 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:33:26.941298  294773 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:33:26.941341  294773 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:33:26.941514  294773 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:33:26.941647  294773 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:33:26.941735  294773 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 510.038928ms
	I1025 09:33:26.941862  294773 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:33:26.941986  294773 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:33:26.942111  294773 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:33:26.942231  294773 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:33:26.942335  294773 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.127628818s
	I1025 09:33:26.942424  294773 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.57127402s
	I1025 09:33:26.942501  294773 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00151284s
	I1025 09:33:26.942611  294773 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:33:26.942778  294773 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:33:26.942884  294773 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:33:26.943091  294773 kubeadm.go:318] [mark-control-plane] Marking the node addons-523976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:33:26.943190  294773 kubeadm.go:318] [bootstrap-token] Using token: 866dt2.1n9azi2o7n2cpdcp
	I1025 09:33:26.948198  294773 out.go:252]   - Configuring RBAC rules ...
	I1025 09:33:26.948374  294773 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:33:26.948499  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:33:26.948667  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:33:26.948814  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:33:26.948943  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:33:26.949054  294773 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:33:26.949208  294773 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:33:26.949271  294773 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:33:26.949343  294773 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:33:26.949385  294773 kubeadm.go:318] 
	I1025 09:33:26.949468  294773 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:33:26.949476  294773 kubeadm.go:318] 
	I1025 09:33:26.949559  294773 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:33:26.949572  294773 kubeadm.go:318] 
	I1025 09:33:26.949598  294773 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:33:26.949676  294773 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:33:26.949736  294773 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:33:26.949746  294773 kubeadm.go:318] 
	I1025 09:33:26.949816  294773 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:33:26.949825  294773 kubeadm.go:318] 
	I1025 09:33:26.949873  294773 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:33:26.949879  294773 kubeadm.go:318] 
	I1025 09:33:26.949951  294773 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:33:26.950070  294773 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:33:26.950154  294773 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:33:26.950163  294773 kubeadm.go:318] 
	I1025 09:33:26.950261  294773 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:33:26.950364  294773 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:33:26.950379  294773 kubeadm.go:318] 
	I1025 09:33:26.950490  294773 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 866dt2.1n9azi2o7n2cpdcp \
	I1025 09:33:26.950613  294773 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 09:33:26.950640  294773 kubeadm.go:318] 	--control-plane 
	I1025 09:33:26.950647  294773 kubeadm.go:318] 
	I1025 09:33:26.950749  294773 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:33:26.950764  294773 kubeadm.go:318] 
	I1025 09:33:26.950867  294773 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 866dt2.1n9azi2o7n2cpdcp \
	I1025 09:33:26.950994  294773 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
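	The join commands printed above embed a discovery-token CA cert hash. Should it need recomputing on this node, the standard derivation is sha256 over the DER-encoded CA public key (cert path taken from the certificateDir this init used; the CA key here is RSA, per the 1679-byte PEM logged earlier):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0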
	I1025 09:33:26.951019  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:33:26.951031  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:26.956042  294773 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:33:26.959839  294773 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:33:26.964240  294773 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:33:26.964267  294773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:33:26.979082  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
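	With the docker driver plus crio runtime, minikube picked kindnet as the CNI and applied its manifest above. A hedged follow-up check (the DaemonSet name `kindnet` is assumed from the upstream manifest, not shown in this log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system rollout status daemonset kindnet --timeout=120s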
	I1025 09:33:27.268927  294773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:33:27.269079  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:27.269162  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-523976 minikube.k8s.io/updated_at=2025_10_25T09_33_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=addons-523976 minikube.k8s.io/primary=true
	I1025 09:33:27.441354  294773 ops.go:34] apiserver oom_adj: -16
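	The -16 read back above is the apiserver's legacy OOM adjustment; minikube checks it so that the kernel's OOM killer prefers almost anything else on the node over the apiserver. The same check by hand:

	cat /proc/$(pgrep kube-apiserver)/oom_adj        # -16 in this run
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # the modern equivalent knob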
	I1025 09:33:27.448449  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:27.948520  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:28.449204  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:28.949248  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:29.449343  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:29.948564  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:30.448918  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:30.949143  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:31.449081  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:31.949222  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:32.053363  294773 kubeadm.go:1113] duration metric: took 4.784329245s to wait for elevateKubeSystemPrivileges
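	The 4.78s of repeated `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding it polls roughly every 500ms until the default ServiceAccount exists. A minimal sketch of that loop:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done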
	I1025 09:33:32.053397  294773 kubeadm.go:402] duration metric: took 22.426635075s to StartCluster
	I1025 09:33:32.053426  294773 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:32.053563  294773 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:33:32.053961  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:32.054166  294773 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:32.054320  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:32.054566  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:32.054609  294773 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
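	The toEnable map above is the effective addon set for this profile; each true entry is enabled in turn below. Hedged illustration only: any single addon can also be toggled against the same profile from the CLI, e.g.

	minikube -p addons-523976 addons enable metrics-server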
	I1025 09:33:32.054699  294773 addons.go:69] Setting yakd=true in profile "addons-523976"
	I1025 09:33:32.054717  294773 addons.go:238] Setting addon yakd=true in "addons-523976"
	I1025 09:33:32.054739  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.055269  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.055768  294773 addons.go:69] Setting metrics-server=true in profile "addons-523976"
	I1025 09:33:32.055793  294773 addons.go:238] Setting addon metrics-server=true in "addons-523976"
	I1025 09:33:32.055821  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.056219  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.056356  294773 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-523976"
	I1025 09:33:32.056394  294773 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-523976"
	I1025 09:33:32.056474  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.056906  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.058473  294773 addons.go:69] Setting registry=true in profile "addons-523976"
	I1025 09:33:32.058530  294773 addons.go:238] Setting addon registry=true in "addons-523976"
	I1025 09:33:32.058575  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.059015  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.059548  294773 addons.go:69] Setting registry-creds=true in profile "addons-523976"
	I1025 09:33:32.059568  294773 addons.go:238] Setting addon registry-creds=true in "addons-523976"
	I1025 09:33:32.059589  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.059970  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.060110  294773 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-523976"
	I1025 09:33:32.060126  294773 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-523976"
	I1025 09:33:32.060145  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.060517  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.072462  294773 addons.go:69] Setting cloud-spanner=true in profile "addons-523976"
	I1025 09:33:32.072548  294773 addons.go:238] Setting addon cloud-spanner=true in "addons-523976"
	I1025 09:33:32.072625  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.073143  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.093069  294773 addons.go:69] Setting storage-provisioner=true in profile "addons-523976"
	I1025 09:33:32.093105  294773 addons.go:238] Setting addon storage-provisioner=true in "addons-523976"
	I1025 09:33:32.093141  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.093708  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.097481  294773 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-523976"
	I1025 09:33:32.097521  294773 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-523976"
	I1025 09:33:32.097913  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.098593  294773 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-523976"
	I1025 09:33:32.098682  294773 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-523976"
	I1025 09:33:32.098750  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.100860  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.111344  294773 addons.go:69] Setting volcano=true in profile "addons-523976"
	I1025 09:33:32.111382  294773 addons.go:238] Setting addon volcano=true in "addons-523976"
	I1025 09:33:32.111425  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.111987  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.115220  294773 addons.go:69] Setting default-storageclass=true in profile "addons-523976"
	I1025 09:33:32.115255  294773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-523976"
	I1025 09:33:32.115641  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.127288  294773 addons.go:69] Setting volumesnapshots=true in profile "addons-523976"
	I1025 09:33:32.127334  294773 addons.go:238] Setting addon volumesnapshots=true in "addons-523976"
	I1025 09:33:32.127384  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.127971  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.143305  294773 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:32.144576  294773 addons.go:69] Setting gcp-auth=true in profile "addons-523976"
	I1025 09:33:32.144616  294773 mustload.go:65] Loading cluster: addons-523976
	I1025 09:33:32.144997  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:32.145309  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.146920  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:32.166473  294773 addons.go:69] Setting ingress=true in profile "addons-523976"
	I1025 09:33:32.166512  294773 addons.go:238] Setting addon ingress=true in "addons-523976"
	I1025 09:33:32.166558  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.167029  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.202615  294773 addons.go:69] Setting ingress-dns=true in profile "addons-523976"
	I1025 09:33:32.202645  294773 addons.go:238] Setting addon ingress-dns=true in "addons-523976"
	I1025 09:33:32.202688  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.203236  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.217561  294773 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:33:32.218418  294773 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:33:32.232166  294773 addons.go:69] Setting inspektor-gadget=true in profile "addons-523976"
	I1025 09:33:32.232197  294773 addons.go:238] Setting addon inspektor-gadget=true in "addons-523976"
	I1025 09:33:32.232234  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.232695  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.245517  294773 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:33:32.250003  294773 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:32.250024  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:33:32.250085  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
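	The repeated `docker container inspect -f` template here resolves which host port Docker mapped to the node container's SSH port 22; the sshutil lines that follow show it came back as 33142. Stripped of the log's extra quoting:

	docker container inspect addons-523976 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # 33142 in this run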
	I1025 09:33:32.250381  294773 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:33:32.256525  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.259097  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:33:32.259117  294773 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:33:32.259183  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.289582  294773 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:33:32.290454  294773 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-523976"
	I1025 09:33:32.295255  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.295696  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.302437  294773 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:32.302741  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:33:32.302925  294773 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:32.302969  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:33:32.303059  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290547  294773 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:33:32.331291  294773 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:32.331309  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:32.331452  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290581  294773 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:32.331639  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:33:32.331687  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.352832  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:33:32.352856  294773 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:33:32.352919  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290585  294773 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:33:32.361202  294773 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:32.361229  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:33:32.361300  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	W1025 09:33:32.386808  294773 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:33:32.387138  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:33:32.387197  294773 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:33:32.387293  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.410055  294773 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:33:32.412958  294773 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:33:32.412980  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:33:32.413054  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.432847  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:33:32.443257  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:33:32.444620  294773 addons.go:238] Setting addon default-storageclass=true in "addons-523976"
	I1025 09:33:32.444655  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.445047  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.456444  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.490821  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:33:32.494851  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:33:32.495185  294773 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:33:32.497167  294773 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:33:32.497252  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:33:32.508885  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.520285  294773 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:33:32.520459  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:33:32.525484  294773 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:33:32.525683  294773 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:33:32.525707  294773 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:33:32.525776  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.525959  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:32.528382  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:33:32.528508  294773 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:32.529573  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:33:32.529647  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.544121  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:32.547781  294773 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:32.547804  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:33:32.547870  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.568589  294773 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:32.568613  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:33:32.568681  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.579772  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.580575  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.581725  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.584281  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:33:32.587239  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:33:32.590377  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.594638  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:33:32.594662  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:33:32.594729  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.626207  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.626203  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.646016  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.692261  294773 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:32.692284  294773 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:32.692346  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.692570  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.726448  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.739888  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	W1025 09:33:32.743812  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.743865  294773 retry.go:31] will retry after 206.867614ms: ssh: handshake failed: EOF
	I1025 09:33:32.752522  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.757258  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.758152  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	W1025 09:33:32.759626  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.759648  294773 retry.go:31] will retry after 206.364231ms: ssh: handshake failed: EOF
	I1025 09:33:32.812832  294773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:32.813017  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1025 09:33:32.967705  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.967779  294773 retry.go:31] will retry after 334.533988ms: ssh: handshake failed: EOF
	I1025 09:33:33.080232  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:33.113535  294773 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:33:33.113560  294773 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:33:33.153948  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:33:33.153969  294773 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:33:33.164358  294773 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:33.164382  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:33:33.206374  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:33.252061  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:33:33.252127  294773 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:33:33.253837  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:33.272028  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:33.289779  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:33.299859  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:33.300139  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:33.316538  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:33.337565  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:33:33.337593  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:33:33.366719  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:33:33.366743  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:33:33.377174  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:33:33.377199  294773 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:33:33.380190  294773 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:33.380212  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:33:33.395105  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:33.402997  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:33.523021  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:33:33.523095  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:33:33.575864  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:33.579948  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:33.580011  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:33:33.610056  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:33:33.610129  294773 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:33:33.740012  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:33.798316  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:33:33.798389  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:33:33.850858  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:33.850937  294773 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:33:33.929625  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:33:33.929696  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:33:33.957569  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:33:33.957644  294773 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:33:34.014740  294773 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201690818s)
	I1025 09:33:34.014838  294773 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.201976425s)
	I1025 09:33:34.014891  294773 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 09:33:34.015744  294773 node_ready.go:35] waiting up to 6m0s for node "addons-523976" to be "Ready" ...
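
The Completed line above pipes the CoreDNS ConfigMap through sed so that host.minikube.internal resolves to the gateway IP 192.168.49.1. A minimal Go sketch of the same transformation, assuming a stock Corefile with a `forward . /etc/resolv.conf` directive (injectHostRecord is a hypothetical helper, not minikube's implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord inserts a CoreDNS hosts stanza just ahead of the
    // forward directive, mirroring the sed pipeline in the log above.
    func injectHostRecord(corefile, gatewayIP string) string {
        hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
        var b strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(hosts) // insert before the forward block
            }
            b.WriteString(line)
        }
        return b.String()
    }

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
        fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }

The fallthrough directive keeps every other name flowing on to the regular upstream forwarders.
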
	I1025 09:33:34.035099  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:34.212521  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:33:34.212593  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:33:34.276532  294773 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:34.276604  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:33:34.519881  294773 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-523976" context rescaled to 1 replicas
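
The kapi.go:214 line above trims the coredns deployment to a single replica. With client-go this is typically done through the Scale subresource; a sketch assuming an already-configured kubernetes.Interface (scaleDeployment is illustrative, not minikube's kapi code):

    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleDeployment reads the deployment's current Scale, sets the
    // desired replica count, and writes it back, i.e. the "rescaled to
    // 1 replicas" step in the log above.
    func scaleDeployment(ctx context.Context, clientset kubernetes.Interface, ns, name string, replicas int32) error {
        scale, err := clientset.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = clientset.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
        return err
    }
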
	I1025 09:33:34.525590  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:33:34.525668  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:33:34.549903  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:34.641385  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:33:34.641455  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:33:34.893015  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:33:34.893039  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:33:35.023819  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:33:35.023840  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:33:35.240754  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:33:35.240835  294773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:33:35.271708  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:33:35.271783  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:33:35.286504  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:33:35.286574  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:33:35.509041  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:35.509120  294773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:33:35.726811  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 09:33:36.035834  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:36.748844  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.542382309s)
	I1025 09:33:36.748983  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.668724937s)
	I1025 09:33:37.911265  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.657355025s)
	I1025 09:33:37.911450  294773 addons.go:479] Verifying addon ingress=true in "addons-523976"
	I1025 09:33:37.911373  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.639246674s)
	I1025 09:33:37.911477  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.611529219s)
	I1025 09:33:37.911528  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.611350436s)
	I1025 09:33:37.911557  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.594998105s)
	I1025 09:33:37.911590  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.516463867s)
	I1025 09:33:37.911815  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.508791988s)
	I1025 09:33:37.911967  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336025319s)
	W1025 09:33:37.911994  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:37.912011  294773 retry.go:31] will retry after 141.929822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
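
Every retry of this apply fails identically: ig-crd.yaml is a static file whose YAML document is missing apiVersion and kind, so kubectl's client-side validation rejects it no matter how many times it is reapplied. A hedged pre-flight check one could run before shelling out to kubectl (checkManifest is hypothetical; a real checker would also split multi-document files on "---"):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // manifestHeader holds the two fields kubectl validation requires on
    // every object; everything else in the document is ignored here.
    type manifestHeader struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    // checkManifest reports whether a single-document manifest would trip
    // the "apiVersion not set, kind not set" error seen in the log.
    func checkManifest(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var h manifestHeader
        if err := yaml.Unmarshal(data, &h); err != nil {
            return err
        }
        if h.APIVersion == "" || h.Kind == "" {
            return fmt.Errorf("%s: apiVersion/kind not set", path)
        }
        return nil
    }

    func main() {
        if err := checkManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
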
	I1025 09:33:37.911427  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.621578043s)
	I1025 09:33:37.912053  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.171961658s)
	I1025 09:33:37.912161  294773 addons.go:479] Verifying addon registry=true in "addons-523976"
	I1025 09:33:37.912283  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.87707482s)
	I1025 09:33:37.912363  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.362388455s)
	I1025 09:33:37.914308  294773 addons.go:479] Verifying addon metrics-server=true in "addons-523976"
	W1025 09:33:37.914359  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:33:37.914380  294773 retry.go:31] will retry after 341.272991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
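
Unlike the ig-crd.yaml case, this failure is an ordering problem rather than a broken manifest: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the API server has established the new kind, hence "ensure CRDs are installed first". The retry succeeds once the CRD is ready; splitting the apply and waiting for establishment avoids the loop altogether. A sketch of that two-phase flow (file paths taken from the log; the split itself is an assumption, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // run shells out and streams output; a tiny helper for the sketch.
    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        steps := [][]string{
            // 1. Create the CRD on its own first.
            {"kubectl", "apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
            // 2. Wait until the API server has established the new kind.
            {"kubectl", "wait", "--for=condition=established", "--timeout=60s",
                "crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
            // 3. Only now apply objects of that kind.
            {"kubectl", "apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
        }
        for _, s := range steps {
            if err := run(s...); err != nil {
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
        }
    }
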
	I1025 09:33:37.915228  294773 out.go:179] * Verifying ingress addon...
	I1025 09:33:37.915234  294773 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-523976 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:33:37.917112  294773 out.go:179] * Verifying registry addon...
	I1025 09:33:37.919937  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:33:37.919937  294773 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:33:37.936977  294773 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:37.936998  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:37.937520  294773 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:37.937534  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
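
The kapi.go:75/96 lines here and throughout the rest of this section are a poll loop: list pods matching a label selector, report the current phase, sleep, and try again until everything is Running. A client-go sketch of that pattern (waitForPodsRunning and the 500ms/6m intervals are illustrative, not minikube's actual implementation):

    package main

    import (
        "context"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodsRunning polls pods matching selector until all are Running.
    func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // nothing scheduled yet, or transient API error: keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil // e.g. still Pending, as in the log lines above
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForPodsRunning(context.Background(), cs,
            "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
            panic(err)
        }
    }
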
	W1025 09:33:37.954997  294773 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
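
The default-storageclass error above is an optimistic-concurrency conflict: two writers raced on the local-path StorageClass, and the loser is told to re-read the object and try again. client-go's standard remedy is retry.RetryOnConflict, which refetches on every attempt; a sketch (markNonDefault is illustrative; the clientset would be built as in the previous sketch):

    package addons

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading the object on each attempt so a conflicting concurrent
    // update (as in the log) is absorbed instead of surfacing as an error.
    func markNonDefault(ctx context.Context, clientset kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := clientset.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }
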
	I1025 09:33:38.054900  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:38.256243  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:38.436650  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.437057  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.469256  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.742354393s)
	I1025 09:33:38.469337  294773 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-523976"
	I1025 09:33:38.472387  294773 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:33:38.476035  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:33:38.480369  294773 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:38.480436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:38.522219  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:38.941258  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.941476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.980002  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.224286  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.169295623s)
	W1025 09:33:39.224335  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:39.224355  294773 retry.go:31] will retry after 328.381467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:39.424080  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.424345  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.523966  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.552959  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:39.868637  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:33:39.868730  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:39.893891  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
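
The cli_runner line above resolves which host port Docker mapped to the container's SSH port 22 (33142 here) by evaluating a Go template against docker container inspect. The same lookup from Go via os/exec, assuming the docker CLI and the addons-523976 container exist:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same template as the cli_runner invocation in the log: index the
        // port map at "22/tcp" and take the first binding's HostPort.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "addons-523976").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 33142
    }
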
	I1025 09:33:39.926277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.926871  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.979675  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.023526  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:33:40.049392  294773 addons.go:238] Setting addon gcp-auth=true in "addons-523976"
	I1025 09:33:40.049525  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:40.050073  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:40.072928  294773 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:33:40.072986  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:40.093942  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:40.424213  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.424913  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.479775  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.923749  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.924273  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.981345  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:41.019253  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:41.322567  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.066210094s)
	I1025 09:33:41.322607  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.769611433s)
	I1025 09:33:41.322667  294773 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.24971705s)
	W1025 09:33:41.322677  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:41.322739  294773 retry.go:31] will retry after 529.604297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:41.325986  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:41.328817  294773 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:33:41.331707  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:33:41.331734  294773 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:33:41.345731  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:33:41.345808  294773 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:33:41.360431  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:41.360454  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:33:41.379918  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:41.425113  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.425830  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.479742  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:41.851278  294773 addons.go:479] Verifying addon gcp-auth=true in "addons-523976"
	I1025 09:33:41.852517  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:41.854643  294773 out.go:179] * Verifying gcp-auth addon...
	I1025 09:33:41.858300  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:33:41.873753  294773 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:33:41.873773  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:41.970366  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.970826  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.979702  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:42.362003  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.424177  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.424930  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.479210  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:42.700233  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:42.700263  294773 retry.go:31] will retry after 1.042193162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:42.861895  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.924767  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.925102  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.979660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:43.361628  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.423957  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.424101  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.478968  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:43.518656  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:43.742886  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:43.862351  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.924400  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.924996  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.981196  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:44.362156  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.424289  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.424550  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.478914  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:44.583667  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:44.583751  294773 retry.go:31] will retry after 1.607103469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:44.861614  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.923636  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.924067  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.979933  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:45.362701  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.424124  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.424256  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.479610  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:45.519690  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:45.861833  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.923851  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.924175  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.978789  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.191967  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:46.361932  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.423344  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.423699  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.479796  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.861676  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.924512  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.924718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.979107  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:46.998042  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:46.998077  294773 retry.go:31] will retry after 2.121529079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:47.361907  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.424011  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:47.424357  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.479279  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:47.520373  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:47.861400  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.923168  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:47.923575  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.979401  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.361218  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.423613  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:48.423771  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.479812  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.861654  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.923817  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:48.924241  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.978895  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.120642  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:49.362169  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.424459  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:49.424747  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.479918  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.862193  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.923518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:49.923998  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:33:49.928377  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:49.928407  294773 retry.go:31] will retry after 2.976947527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:49.979239  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:50.019359  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:50.360868  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.422926  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:50.423371  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.479128  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.862064  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.923253  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:50.923428  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.979186  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.361259  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.423729  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.423855  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:51.479485  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.861138  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.923001  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.923067  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:51.978940  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:52.019493  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:52.361488  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.423198  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:52.423342  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.479471  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.861007  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.906159  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:52.926083  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.927049  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:52.979261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.362650  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.425461  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:53.425718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:53.480570  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:53.757648  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:53.757730  294773 retry.go:31] will retry after 2.276492655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:53.861580  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.924088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:53.924182  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:53.979188  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.361743  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.423851  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:54.424655  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:54.479778  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:54.519374  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:54.861725  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.924449  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:54.924829  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:54.980891  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.361890  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.423068  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:55.423337  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:55.479370  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.861212  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.925103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:55.925660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:55.979576  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.034848  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:56.361173  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.424868  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:56.425230  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:56.479242  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:56.819866  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:56.819896  294773 retry.go:31] will retry after 8.994283387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:56.862253  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.923260  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:56.923294  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:56.979478  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:57.019315  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:57.361300  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:57.423237  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:57.423749  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:57.479581  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.861088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:57.923687  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:57.923770  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:57.979664  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.361534  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:58.423944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:58.424241  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:58.478892  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.861882  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:58.923637  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:58.923894  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:58.979727  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:59.019426  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:59.373745  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:59.423651  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:59.423964  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:59.479963  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.861716  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:59.923910  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:59.924217  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:59.979845  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.369320  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:00.424360  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:00.425419  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:00.487592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.861920  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:00.924330  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:00.925103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:00.980169  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:01.361442  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:01.423843  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:01.424005  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:01.479792  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:01.518683  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:01.863357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:01.923468  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:01.923857  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:01.979669  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:02.361823  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:02.424395  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:02.424942  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:02.479994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:02.861772  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:02.924055  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:02.924453  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:02.979658  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:03.361609  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:03.423611  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:03.423900  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:03.481107  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:03.519128  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:03.861004  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:03.923267  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:03.923666  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:03.979436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:04.361317  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:04.423505  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:04.423912  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:04.479765  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:04.862201  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:04.923608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:04.923674  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:04.979984  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:05.363353  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:05.424136  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:05.424802  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:05.479618  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:05.520346  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:05.814901  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:05.862166  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:05.924381  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:05.924700  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:05.980210  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:06.363355  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:06.423360  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:06.423750  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:06.479690  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:06.624889  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:06.624921  294773 retry.go:31] will retry after 8.085733239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:06.862084  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:06.923788  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:06.923922  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:06.979854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:07.362418  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:07.423586  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:07.423791  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:07.479883  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:07.861582  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:07.923639  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:07.923792  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:07.979719  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:08.018942  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:08.363077  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:08.423300  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:08.423682  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:08.479258  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:08.862015  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:08.923245  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:08.923608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:08.980050  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:09.362235  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:09.423804  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:09.424457  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:09.479536  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:09.861320  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:09.923411  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:09.923696  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:09.979746  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:10.019141  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:10.362013  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:10.423363  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:10.425567  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:10.479664  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:10.861475  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:10.923686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:10.923954  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:10.979832  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:11.361180  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:11.423273  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:11.423408  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:11.479937  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:11.861800  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:11.924579  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:11.924701  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:11.979391  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:12.019458  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:12.361557  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:12.465310  294773 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:34:12.465336  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:12.465492  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:12.510216  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:12.538166  294773 node_ready.go:49] node "addons-523976" is "Ready"
	I1025 09:34:12.538197  294773 node_ready.go:38] duration metric: took 38.522425157s for node "addons-523976" to be "Ready" ...
	I1025 09:34:12.538212  294773 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:34:12.538273  294773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:12.557278  294773 api_server.go:72] duration metric: took 40.50307676s to wait for apiserver process to appear ...
	I1025 09:34:12.557354  294773 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:34:12.557389  294773 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:34:12.581479  294773 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:34:12.583781  294773 api_server.go:141] control plane version: v1.34.1
	I1025 09:34:12.583851  294773 api_server.go:131] duration metric: took 26.476299ms to wait for apiserver health ...
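The healthz probe in the lines above is a plain HTTPS GET against the apiserver that succeeds once the response is a 200 whose body reads "ok". A minimal sketch of that check (the endpoint URL is taken from the log; skipping certificate verification is an assumption made here only so the snippet runs against a self-signed apiserver, where a real client would trust the cluster CA):

// healthz.go - poll the apiserver /healthz endpoint until it answers "ok",
// mirroring the api_server.go wait seen in the log. Sketch only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: trust nothing, verify nothing.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.49.2:8443/healthz" // endpoint taken from the log
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // retry interval is illustrative
	}
}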
	I1025 09:34:12.583876  294773 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:34:12.609509  294773 system_pods.go:59] 19 kube-system pods found
	I1025 09:34:12.609613  294773 system_pods.go:61] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.609634  294773 system_pods.go:61] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending
	I1025 09:34:12.609654  294773 system_pods.go:61] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.609691  294773 system_pods.go:61] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.609712  294773 system_pods.go:61] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.609736  294773 system_pods.go:61] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.609773  294773 system_pods.go:61] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.609795  294773 system_pods.go:61] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.609834  294773 system_pods.go:61] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending
	I1025 09:34:12.609857  294773 system_pods.go:61] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.609876  294773 system_pods.go:61] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.609910  294773 system_pods.go:61] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending
	I1025 09:34:12.609935  294773 system_pods.go:61] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.609954  294773 system_pods.go:61] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.609996  294773 system_pods.go:61] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.610021  294773 system_pods.go:61] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.610041  294773 system_pods.go:61] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.610076  294773 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.610107  294773 system_pods.go:61] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.610133  294773 system_pods.go:74] duration metric: took 26.235869ms to wait for pod list to return data ...
	I1025 09:34:12.610179  294773 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:34:12.630703  294773 default_sa.go:45] found service account: "default"
	I1025 09:34:12.630778  294773 default_sa.go:55] duration metric: took 20.5781ms for default service account to be created ...
	I1025 09:34:12.630801  294773 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:34:12.662355  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:12.662436  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.662457  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending
	I1025 09:34:12.662480  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.662517  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.662541  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.662564  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.662601  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.662627  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.662653  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:12.662688  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.662713  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.662734  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending
	I1025 09:34:12.662771  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.662795  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.662815  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.662854  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.662878  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.662898  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.662935  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.662970  294773 retry.go:31] will retry after 216.383731ms: missing components: kube-dns
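Each "will retry after ..." line here and in the ig-crd failures above comes from a generic retry helper: the operation is re-run after a randomized, growing delay until it succeeds or the attempt budget runs out, which is why the quoted delays (2.2s, 8.9s, 8.0s, ...) vary rather than repeat. A compact sketch of that pattern (the function name, jitter scheme, and attempt budget are illustrative, not minikube's actual retry.go):

// backoff.go - re-run op with exponential backoff plus jitter, logging the
// next delay the way the test log does. Illustrative sketch.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		// Randomize the delay so concurrent waiters do not retry in lockstep.
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	_ = retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	}, 5, 200*time.Millisecond)
}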
	I1025 09:34:12.880850  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:12.931917  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:12.931999  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.932023  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:12.932064  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.932092  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.932113  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.932151  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.932176  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.932197  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.932236  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:12.932261  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.932285  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.932324  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:12.932352  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.932374  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.932421  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.932459  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.932508  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.932544  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.932581  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.932616  294773 retry.go:31] will retry after 234.578617ms: missing components: kube-dns
	I1025 09:34:12.951849  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:12.951944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:12.984623  294773 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:34:12.984645  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
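The kapi.go lines poll pods by label selector until every match leaves Pending and reports Running; "Found 3 Pods for label selector" marks the moment the selector first returns objects at all. With client-go the same wait looks roughly like the sketch below (the namespace, selector, and kubeconfig path follow the log; the polling interval and overall shape are assumptions, not minikube's actual kapi.go):

// waitpods.go - poll pods matching a label selector until all are Running,
// as the kapi.go waits in the log do. Sketch built on client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	selector := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if running == len(pods.Items) {
				fmt.Printf("all %d pods for %q are Running\n", running, selector)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // polling interval is illustrative
	}
}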
	I1025 09:34:13.174783  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:13.174902  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:13.174931  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:13.174970  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:34:13.175001  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:13.175025  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:13.175058  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:13.175081  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:13.175104  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:13.175144  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:13.175228  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:13.175250  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:13.175273  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:13.175309  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:13.175335  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:34:13.175358  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:34:13.175396  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:13.175422  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:13.175445  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.175484  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:13.175521  294773 retry.go:31] will retry after 436.812233ms: missing components: kube-dns
	I1025 09:34:13.367546  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:13.428639  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:13.429068  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:13.479741  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:13.627055  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:13.627169  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Running
	I1025 09:34:13.627234  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:13.627259  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:34:13.627306  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:34:13.627331  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:13.627355  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:13.627394  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:13.627425  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:13.627450  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:13.627488  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:13.627514  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:13.627539  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:13.627564  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:34:13.627610  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:34:13.627638  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:34:13.627718  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:34:13.627755  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.627783  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.627827  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Running
	I1025 09:34:13.627944  294773 system_pods.go:126] duration metric: took 997.120994ms to wait for k8s-apps to be running ...
	I1025 09:34:13.627975  294773 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:34:13.628219  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:13.708636  294773 system_svc.go:56] duration metric: took 80.651978ms WaitForService to wait for kubelet
	I1025 09:34:13.708717  294773 kubeadm.go:586] duration metric: took 41.654518417s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:13.708752  294773 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:34:13.712276  294773 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:34:13.712356  294773 node_conditions.go:123] node cpu capacity is 2
	I1025 09:34:13.712384  294773 node_conditions.go:105] duration metric: took 3.608305ms to run NodePressure ...
	I1025 09:34:13.712409  294773 start.go:241] waiting for startup goroutines ...
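The kubelet check a few lines up is just `systemctl is-active --quiet` run over SSH, which exits 0 when the unit is active and non-zero otherwise. A local sketch of the same probe (run on the node itself; the log's invocation also passes the literal word "service", which is tolerated because is-active succeeds if at least one listed unit is active):

// svccheck.go - confirm the kubelet systemd unit is active, as the
// system_svc.go wait in the log does over SSH. Sketch only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 iff the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}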
	I1025 09:34:13.862032  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:13.924721  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:13.925118  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:13.981282  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:14.362888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:14.462346  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:14.462560  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:14.479836  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:14.711114  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:14.862899  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:14.925231  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:14.925557  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:14.980730  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:15.378559  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:15.478199  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:15.478512  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:15.480834  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:15.862041  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:15.905473  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194305674s)
	W1025 09:34:15.905512  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:15.905531  294773 retry.go:31] will retry after 7.709249366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:15.925647  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:15.926015  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:15.980480  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:16.363141  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:16.424789  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:16.426141  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:16.480099  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:16.862628  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:16.925725  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:16.926094  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:16.981610  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:17.363362  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:17.463882  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:17.464055  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:17.479598  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:17.865049  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:17.927335  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:17.928805  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:17.979624  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:18.364135  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:18.425334  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:18.425677  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:18.480208  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:18.861224  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:18.926622  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:18.926703  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:18.981905  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:19.376323  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:19.480623  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:19.481054  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:19.493027  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:19.862333  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:19.923694  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:19.923817  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:19.981505  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:20.361714  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:20.424207  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:20.424352  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:20.479583  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:20.862060  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:20.923693  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:20.923882  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:20.980132  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:21.362457  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:21.464079  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:21.464435  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:21.480177  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:21.862351  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:21.925137  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:21.925278  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:21.979280  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:22.361629  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:22.425121  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:22.425505  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:22.480254  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:22.862040  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:22.924079  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:22.924236  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:22.980105  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:23.361115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:23.424926  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:23.426141  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:23.480731  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:23.614974  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:23.861190  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:23.924809  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:23.928331  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:23.980094  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:24.361751  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:24.424779  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:24.424934  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:24.480331  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:24.676504  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.061487932s)
	W1025 09:34:24.676541  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:24.676559  294773 retry.go:31] will retry after 16.34380046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
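
Note on the failure above: kubectl's client-side validation rejects any document that omits the two mandatory top-level fields, which is exactly what the stderr reports for ig-crd.yaml ("apiVersion not set, kind not set"). For reference, a minimal well-formed manifest carries both; this sketch reuses the gadget namespace that the apply did successfully create, purely as an illustration (the actual contents of ig-crd.yaml are not shown in this log):

	apiVersion: v1      # required top-level field; its absence triggers "apiVersion not set"
	kind: Namespace     # required top-level field; its absence triggers "kind not set"
	metadata:
	  name: gadget      # namespace name taken from the stdout above
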
	I1025 09:34:24.861592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:24.924539  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:24.924794  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:24.979894  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:25.364115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:25.463115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:25.463759  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:25.479762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:25.863865  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:25.924799  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:25.925165  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:25.980114  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:26.362783  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:26.425685  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:26.425824  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:26.480061  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:26.865437  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:26.923628  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:26.923725  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:26.980008  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:27.365261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:27.425206  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:27.425620  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:27.481363  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:27.861695  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:27.924858  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:27.924995  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:27.980270  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:28.363264  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:28.423940  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:28.424133  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:28.480395  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:28.862476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:28.924762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:28.925724  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:28.980732  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:29.374916  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:29.474286  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:29.474684  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:29.480278  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:29.862032  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:29.924261  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:29.924406  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:29.979595  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:30.362934  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:30.424435  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:30.424578  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:30.479918  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:30.862091  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:30.924291  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:30.925183  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:30.979473  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:31.361964  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:31.423176  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:31.424301  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:31.479357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:31.862507  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:31.935565  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:31.940938  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:31.980389  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:32.361574  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:32.424855  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:32.425236  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:32.482046  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:32.861520  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:32.924304  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:32.924827  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:32.981692  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:33.361491  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:33.424065  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:33.424348  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:33.479448  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:33.902703  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:33.938977  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:33.939513  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:33.992803  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:34.363314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:34.425008  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:34.425314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:34.479947  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:34.863944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:34.924495  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:34.924742  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:34.979524  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:35.362854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:35.432510  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:35.432759  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:35.480241  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:35.862303  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:35.924360  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:35.924435  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:35.980115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:36.361781  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:36.426261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:36.427296  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:36.480690  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:36.863229  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:36.925940  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:36.926449  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:36.980848  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:37.362227  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:37.426204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:37.426782  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:37.480623  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:37.862126  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:37.922811  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:37.923363  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:37.979341  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:38.361489  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:38.425942  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:38.426477  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:38.479547  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:38.862012  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:38.924533  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:38.924562  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:38.979768  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:39.362066  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:39.424768  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:39.425277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:39.481790  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:39.862175  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:39.924718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:39.925146  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:39.979863  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:40.363510  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:40.425260  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:40.425645  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:40.479953  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:40.862118  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:40.923957  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:40.924566  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:40.980204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:41.021556  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:41.362036  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:41.425215  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:41.425518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:41.480640  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:41.862806  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:41.924157  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:41.925608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:41.979592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:42.103049  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.081402045s)
	W1025 09:34:42.103093  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:42.103196  294773 retry.go:31] will retry after 25.861703469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:42.361277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:42.425325  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:42.425740  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:42.480476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:42.861604  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:42.925265  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:42.925673  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:42.980186  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:43.361418  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:43.424837  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:43.424879  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:43.481102  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:43.863081  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:43.926863  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:43.927427  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:43.980602  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:44.362752  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:44.425887  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:44.426299  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:44.482994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:44.861667  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:44.924400  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:44.924604  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:44.980519  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:45.388967  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:45.425450  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:45.425983  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:45.490642  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:45.863897  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:45.924896  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:45.925384  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:45.982387  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:46.361420  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:46.425453  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:46.425872  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:46.480915  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:46.863673  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:46.924595  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:46.925496  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:46.979975  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:47.362247  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:47.425032  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:47.425414  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:47.479728  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:47.862683  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:47.964004  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:47.964430  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:47.979481  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:48.361809  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:48.423574  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:48.424045  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:48.480994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:48.862182  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:48.925018  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:48.925476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:48.980844  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:49.362015  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:49.424733  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:49.425642  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:49.479639  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:49.864070  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:49.924327  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:49.925197  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:49.979519  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:50.361392  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:50.426760  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:50.427271  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:50.480482  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:50.861686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:50.925272  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:50.926473  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:50.980136  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:51.361787  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:51.426870  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:51.427430  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:51.480167  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:51.861528  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:51.924341  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:51.924730  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:51.980305  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:52.362570  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:52.425436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:52.425753  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:52.480695  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:52.861527  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:52.924702  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:52.925946  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:52.980660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:53.362546  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:53.424359  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:53.424541  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:53.480125  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:53.861888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:53.924639  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:53.925058  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:53.979289  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:54.361204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:54.426256  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:54.426464  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:54.479280  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:54.861464  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:54.924502  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:54.924649  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:54.979863  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:55.362674  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:55.426822  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:55.427454  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:55.480384  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:55.862370  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:55.925055  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:55.925662  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:55.980262  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:56.361221  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:56.424024  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:56.424202  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:56.479373  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:56.861028  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:56.923494  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:56.924107  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:56.979233  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:57.361711  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:57.424780  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:57.425980  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:57.480655  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:57.862595  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:57.925182  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:57.925679  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:57.979945  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:58.371888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:58.427055  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:58.427268  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:58.479578  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:58.862042  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:58.925987  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:58.926348  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:58.980310  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:59.361554  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:59.424273  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:59.424728  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:59.480325  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:59.861631  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:59.926939  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:59.927298  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:59.979902  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:00.371404  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:00.426511  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:00.427247  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:00.479729  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:00.862526  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:00.924820  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:00.925039  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:00.980309  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:01.362292  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:01.424071  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:01.424243  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:01.480553  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:01.862392  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:01.924785  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:01.925241  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:01.979436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:02.362842  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:02.426166  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:02.426545  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:02.480571  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:02.861734  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:02.925729  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:02.926152  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:02.980627  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:03.362470  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:03.424047  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:03.424103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:03.480142  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:03.862222  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:03.923507  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:03.924104  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:03.980518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:04.362810  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:04.426157  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:04.426620  294773 kapi.go:107] duration metric: took 1m26.506683422s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:35:04.480748  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:04.862879  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:04.924682  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:04.987618  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:05.362433  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:05.423816  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:05.480890  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:05.861938  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:05.932295  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:05.979470  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:06.361698  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:06.424157  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:06.480343  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:06.861747  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:06.924108  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:06.979129  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:07.362434  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:07.423509  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:07.480040  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:07.862314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:07.923596  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:07.965888  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:35:07.979614  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:08.362281  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:08.423803  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:08.480130  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:08.862124  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:08.923681  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:08.986962  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:09.114950  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149020723s)
	W1025 09:35:09.115032  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:35:09.115187  294773 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
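
At this point the addon machinery gives up: after three identically failing applies (at 09:34:23, 09:34:41, and 09:35:07, with backoffs of roughly 16s and 26s per the retry.go lines) it surfaces the error rather than blocking the remaining addons. The stderr's own suggested workaround is to disable client-side validation, i.e. rerun the same command with the flag appended:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml

That only skips the schema check; a document genuinely missing apiVersion and kind would still fail server-side, so the underlying fix is a regenerated ig-crd.yaml with both fields set.
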
	I1025 09:35:09.361466  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:09.424225  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:09.479567  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:09.861753  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:09.923566  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:09.979686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:10.362245  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:10.423858  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:10.479988  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:10.861708  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:10.924074  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:10.980035  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:11.362173  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:11.423404  294773 kapi.go:107] duration metric: took 1m33.503473644s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:35:11.479599  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:11.862174  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:11.980067  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:12.362348  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:12.479873  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:12.861930  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:12.980568  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:13.362497  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:13.491357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:13.861803  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:13.980958  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:14.361589  294773 kapi.go:107] duration metric: took 1m32.503280429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:35:14.371456  294773 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-523976 cluster.
	I1025 09:35:14.378079  294773 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:35:14.385401  294773 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
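	
	The gcp-auth-skip-secret key mentioned above is a pod label. A minimal sketch of a pod that opts out of the credential mount (name and image are placeholders; the message confirms the key, the "true" value is assumed):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-creds-example            # hypothetical name
	      labels:
	        gcp-auth-skip-secret: "true"    # key from the message above
	    spec:
	      containers:
	      - name: app
	        image: registry.k8s.io/pause:3.9
	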
	I1025 09:35:14.482312  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:14.979587  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:15.480048  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:15.979254  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:16.480395  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:16.979939  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:17.479989  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:17.979420  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:18.481495  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:18.983619  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:19.480226  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:19.984762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:20.480651  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:20.979687  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:21.481520  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:21.980572  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:22.479358  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:22.980854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:23.479088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:23.979864  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:24.479808  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:24.979582  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:25.479415  294773 kapi.go:107] duration metric: took 1m47.003378818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:35:25.482606  294773 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:35:25.485477  294773 addons.go:514] duration metric: took 1m53.43086173s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner registry-creds nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 09:35:25.485543  294773 start.go:246] waiting for cluster config update ...
	I1025 09:35:25.485565  294773 start.go:255] writing updated cluster config ...
	I1025 09:35:25.485911  294773 ssh_runner.go:195] Run: rm -f paused
	I1025 09:35:25.489867  294773 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:35:25.494352  294773 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7ztdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.499056  294773 pod_ready.go:94] pod "coredns-66bc5c9577-7ztdw" is "Ready"
	I1025 09:35:25.499091  294773 pod_ready.go:86] duration metric: took 4.708494ms for pod "coredns-66bc5c9577-7ztdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.501489  294773 pod_ready.go:83] waiting for pod "etcd-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.506234  294773 pod_ready.go:94] pod "etcd-addons-523976" is "Ready"
	I1025 09:35:25.506262  294773 pod_ready.go:86] duration metric: took 4.748495ms for pod "etcd-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.508809  294773 pod_ready.go:83] waiting for pod "kube-apiserver-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.513835  294773 pod_ready.go:94] pod "kube-apiserver-addons-523976" is "Ready"
	I1025 09:35:25.513868  294773 pod_ready.go:86] duration metric: took 5.032003ms for pod "kube-apiserver-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.516347  294773 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.894846  294773 pod_ready.go:94] pod "kube-controller-manager-addons-523976" is "Ready"
	I1025 09:35:25.894874  294773 pod_ready.go:86] duration metric: took 378.498321ms for pod "kube-controller-manager-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.095428  294773 pod_ready.go:83] waiting for pod "kube-proxy-sfnch" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.494701  294773 pod_ready.go:94] pod "kube-proxy-sfnch" is "Ready"
	I1025 09:35:26.494796  294773 pod_ready.go:86] duration metric: took 399.341039ms for pod "kube-proxy-sfnch" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.695130  294773 pod_ready.go:83] waiting for pod "kube-scheduler-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:27.094436  294773 pod_ready.go:94] pod "kube-scheduler-addons-523976" is "Ready"
	I1025 09:35:27.094465  294773 pod_ready.go:86] duration metric: took 399.277956ms for pod "kube-scheduler-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:27.094479  294773 pod_ready.go:40] duration metric: took 1.604579657s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
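	
	The extra wait above polls each kube-system pod by label until it reports Ready. Roughly the same check can be reproduced by hand with kubectl wait (a sketch, not the command minikube runs internally):
	
	    kubectl wait --for=condition=Ready pod \
	      -l k8s-app=kube-dns -n kube-system --timeout=4m
	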
	I1025 09:35:27.410606  294773 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:35:27.415673  294773 out.go:179] * Done! kubectl is now configured to use "addons-523976" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:38:25 addons-523976 crio[831]: time="2025-10-25T09:38:25.910131035Z" level=info msg="Removed container 9b098c9de5615df78c3283a49e5fccc3c03be3cb73fe4dee5f7e8a69da7ff589: kube-system/registry-creds-764b6fb674-8qvgv/registry-creds" id=e67db237-1453-4e63-afc1-b9b19e309fdc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.358920447Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-5fwgq/POD" id=42c1c1c0-2b6a-4c63-9700-9ed8a34a4e4b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.358990216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.378039773Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5fwgq Namespace:default ID:5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754 UID:51085969-6431-4063-8ea5-6abb01a4d61c NetNS:/var/run/netns/02f2a009-59bb-4986-b41e-f048c64fe632 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c4c850}] Aliases:map[]}"
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.378623649Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-5fwgq to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.400967755Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-5fwgq Namespace:default ID:5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754 UID:51085969-6431-4063-8ea5-6abb01a4d61c NetNS:/var/run/netns/02f2a009-59bb-4986-b41e-f048c64fe632 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001c4c850}] Aliases:map[]}"
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.401292009Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-5fwgq for CNI network kindnet (type=ptp)"
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.411364105Z" level=info msg="Ran pod sandbox 5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754 with infra container: default/hello-world-app-5d498dc89-5fwgq/POD" id=42c1c1c0-2b6a-4c63-9700-9ed8a34a4e4b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.412951563Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=431bc67a-144c-4522-b207-edd280b4053c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.413198902Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=431bc67a-144c-4522-b207-edd280b4053c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.413319798Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=431bc67a-144c-4522-b207-edd280b4053c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.416762632Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=10debb2d-2e28-4c5e-bc02-86fa86019589 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:38:34 addons-523976 crio[831]: time="2025-10-25T09:38:34.420899103Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.069298039Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=10debb2d-2e28-4c5e-bc02-86fa86019589 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.071348305Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d646e532-90e9-4698-9955-5ee4bd89a3f1 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.07555073Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e702da56-1ce2-4c32-ab76-3c72d2948f67 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.084576126Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-5fwgq/hello-world-app" id=dbc51089-8eb1-4909-8b56-fb59b830b313 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.084807669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.095675984Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.095895581Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/424dacc035cbce291fc1b21cf64f56065bf366d9327f1d82ad98e98f94a2a5ff/merged/etc/passwd: no such file or directory"
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.095918063Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/424dacc035cbce291fc1b21cf64f56065bf366d9327f1d82ad98e98f94a2a5ff/merged/etc/group: no such file or directory"
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.096590171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.119675865Z" level=info msg="Created container d22b01f4617a7387d3c208900b792772554f0fcb319dce7638aa11407758576c: default/hello-world-app-5d498dc89-5fwgq/hello-world-app" id=dbc51089-8eb1-4909-8b56-fb59b830b313 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.123117214Z" level=info msg="Starting container: d22b01f4617a7387d3c208900b792772554f0fcb319dce7638aa11407758576c" id=3a3d9759-0226-45bb-9ce5-7a2182c7119c name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:38:35 addons-523976 crio[831]: time="2025-10-25T09:38:35.127184498Z" level=info msg="Started container" PID=7233 containerID=d22b01f4617a7387d3c208900b792772554f0fcb319dce7638aa11407758576c description=default/hello-world-app-5d498dc89-5fwgq/hello-world-app id=3a3d9759-0226-45bb-9ce5-7a2182c7119c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754
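	
	The CRI-O entries above trace one complete pod start for hello-world-app: RunPodSandbox, CNI attach to the kindnet network, image pull, CreateContainer, StartContainer. The same lifecycle can be inspected after the fact with crictl (a sketch using the names and ID prefix from the log; standard crictl flags assumed):
	
	    crictl pods --name hello-world-app-5d498dc89-5fwgq   # find the sandbox
	    crictl ps --name hello-world-app                     # its container
	    crictl logs d22b01f4617a7                            # ID prefix from the log
	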
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d22b01f4617a7       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   5607f3d5eecb1       hello-world-app-5d498dc89-5fwgq             default
	828a7a20fec8b       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             10 seconds ago           Exited              registry-creds                           1                   91e4e0a0f2c13       registry-creds-764b6fb674-8qvgv             kube-system
	fd64f5a68a97c       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   1b4070394bb0a       nginx                                       default
	95759d5756a3e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          3 minutes ago            Running             busybox                                  0                   23a17b309db44       busybox                                     default
	ed2b16c18354f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	5aa333b5df5f3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	7fa48c16691b3       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	dba2f43dd64c7       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	c0b6483dcceab       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   ff76404a183b7       gadget-47j62                                gadget
	a28457a8a0b99       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	a85b223a43799       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   ade14d6ee2368       gcp-auth-78565c9fb4-sv7g4                   gcp-auth
	7526138c74ffc       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   c63d3cbd08d49       ingress-nginx-controller-675c5ddd98-bs2mg   ingress-nginx
	f6953e828b170       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   9475cc6c28e4e       ingress-nginx-admission-patch-gd8wq         ingress-nginx
	50e4b1142cbe6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   3b75b6242f09a       registry-6b586f9694-zbqtr                   kube-system
	39f4662ae9889       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   a0ad6b08471b2       nvidia-device-plugin-daemonset-bc95g        kube-system
	1b9e3466e10d1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   ae7d2b4221b95       registry-proxy-2kb6l                        kube-system
	0fb942eed357a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   306b86391f445       kube-ingress-dns-minikube                   kube-system
	cf39d704be2f7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago            Running             csi-external-health-monitor-controller   0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                    kube-system
	52ad74c56c561       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago            Running             yakd                                     0                   4b01ebe945ad7       yakd-dashboard-5ff678cb9-4p7gh              yakd-dashboard
	8ffb283dc89c3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago            Running             local-path-provisioner                   0                   87188b0223099       local-path-provisioner-648f6765c9-b49pk     local-path-storage
	76b94cc8a56cd       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago            Running             csi-resizer                              0                   3474cd324087b       csi-hostpath-resizer-0                      kube-system
	bd4a3acf65df2       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   f35f7e7bf2689       csi-hostpath-attacher-0                     kube-system
	c2ce713b653ba       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   01fc6df7d665b       cloud-spanner-emulator-86bd5cbb97-4j9jb     default
	3552646870f03       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   f33fe217f6ca6       snapshot-controller-7d9fbc56b8-ml7zh        kube-system
	1671d001e906d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago            Running             volume-snapshot-controller               0                   702ab1aafa6bc       snapshot-controller-7d9fbc56b8-225jc        kube-system
	526fdc9a670ac       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   8f72727748e62       metrics-server-85b7d694d7-rvf2w             kube-system
	dc240a0f2902a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             4 minutes ago            Exited              create                                   0                   fd6d4c4db03c8       ingress-nginx-admission-create-z2xx7        ingress-nginx
	baef1ffd7c044       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   00299902c1d1b       storage-provisioner                         kube-system
	08942e9bb2ed5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   09fc4f35a43a0       coredns-66bc5c9577-7ztdw                    kube-system
	1522372d280f9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   1fa4251ee3f13       kindnet-x2lt6                               kube-system
	a34c99943c936       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   977202e858a1e       kube-proxy-sfnch                            kube-system
	dc95e59147e2e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   c49ae75a87042       kube-scheduler-addons-523976                kube-system
	3f9ccf0f1d26a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   a5b7f5746f60e       etcd-addons-523976                          kube-system
	20bbcad0ad16d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   b991f7b33446f       kube-apiserver-addons-523976                kube-system
	78f4542e06d9b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   e0f61e42ee1d4       kube-controller-manager-addons-523976       kube-system
	
	
	==> coredns [08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec] <==
	[INFO] 10.244.0.14:35614 - 50306 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001612707s
	[INFO] 10.244.0.14:35614 - 14746 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000095025s
	[INFO] 10.244.0.14:35614 - 8531 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000076349s
	[INFO] 10.244.0.14:52949 - 31067 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145151s
	[INFO] 10.244.0.14:52949 - 31288 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093146s
	[INFO] 10.244.0.14:35691 - 24587 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079172s
	[INFO] 10.244.0.14:35691 - 24374 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180122s
	[INFO] 10.244.0.14:43133 - 9999 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080977s
	[INFO] 10.244.0.14:43133 - 9802 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059349s
	[INFO] 10.244.0.14:48822 - 53661 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001635846s
	[INFO] 10.244.0.14:48822 - 53842 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001569129s
	[INFO] 10.244.0.14:45126 - 40056 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103952s
	[INFO] 10.244.0.14:45126 - 39917 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099325s
	[INFO] 10.244.0.20:57576 - 62403 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182632s
	[INFO] 10.244.0.20:55826 - 48568 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000087755s
	[INFO] 10.244.0.20:34350 - 37491 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190461s
	[INFO] 10.244.0.20:45288 - 45649 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00008275s
	[INFO] 10.244.0.20:34389 - 9784 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139613s
	[INFO] 10.244.0.20:43730 - 26631 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064517s
	[INFO] 10.244.0.20:57026 - 20742 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001876506s
	[INFO] 10.244.0.20:60046 - 5008 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00225626s
	[INFO] 10.244.0.20:45951 - 53029 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001463134s
	[INFO] 10.244.0.20:34130 - 57338 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001063293s
	[INFO] 10.244.0.23:59021 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190157s
	[INFO] 10.244.0.23:47287 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138004s
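	
	The NXDOMAIN runs above are the resolver walking the pod's DNS search path (namespace, svc, cluster, then the EC2 host suffix) before the fully qualified name answers NOERROR. A sketch of the pod-side /etc/resolv.conf that would produce exactly this pattern (the nameserver address is the conventional kube-dns ClusterIP, assumed here):
	
	    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10   # assumed default kube-dns service IP
	    options ndots:5         # names with fewer than 5 dots try each suffix first
	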
	
	
	==> describe nodes <==
	Name:               addons-523976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-523976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=addons-523976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-523976
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-523976"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-523976
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:38:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:38:31 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:38:31 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:38:31 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:38:31 +0000   Sat, 25 Oct 2025 09:34:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-523976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0ef687d5-da5c-4f15-a993-7ab4a5927695
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	  default                     cloud-spanner-emulator-86bd5cbb97-4j9jb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  default                     hello-world-app-5d498dc89-5fwgq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-47j62                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  gcp-auth                    gcp-auth-78565c9fb4-sv7g4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bs2mg    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m58s
	  kube-system                 coredns-66bc5c9577-7ztdw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m4s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 csi-hostpathplugin-jzdxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 etcd-addons-523976                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m9s
	  kube-system                 kindnet-x2lt6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m4s
	  kube-system                 kube-apiserver-addons-523976                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-addons-523976        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-proxy-sfnch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-scheduler-addons-523976                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-85b7d694d7-rvf2w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m59s
	  kube-system                 nvidia-device-plugin-daemonset-bc95g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-6b586f9694-zbqtr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 registry-creds-764b6fb674-8qvgv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 registry-proxy-2kb6l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 snapshot-controller-7d9fbc56b8-225jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 snapshot-controller-7d9fbc56b8-ml7zh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  local-path-storage          local-path-provisioner-648f6765c9-b49pk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4p7gh               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m3s   kube-proxy       
	  Normal   Starting                 5m9s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s   kubelet          Node addons-523976 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s   kubelet          Node addons-523976 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s   kubelet          Node addons-523976 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m5s   node-controller  Node addons-523976 event: Registered Node addons-523976 in Controller
	  Normal   NodeReady                4m23s  kubelet          Node addons-523976 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015587] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503041] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036759] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.769713] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.474162] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:29] hrtimer: interrupt took 30248914 ns
	[Oct25 09:08] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct25 09:31] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[  +0.069522] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666] <==
	{"level":"warn","ts":"2025-10-25T09:33:22.242664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.257219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.273790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.310720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.335185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.357607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.361964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.378553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.403347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.418249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.432370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.472467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.473137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.484918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.507286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.539978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.568266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.582277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.696115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:38.912296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:38.963546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.469271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.485281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.544627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.564739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39164","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a85b223a43799148b940ebf69ec28f62644781722898fd7f8089aba4eb872729] <==
	2025/10/25 09:35:13 GCP Auth Webhook started!
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:48 Ready to marshal response ...
	2025/10/25 09:35:48 Ready to write response ...
	2025/10/25 09:35:51 Ready to marshal response ...
	2025/10/25 09:35:51 Ready to write response ...
	2025/10/25 09:35:51 Ready to marshal response ...
	2025/10/25 09:35:51 Ready to write response ...
	2025/10/25 09:35:59 Ready to marshal response ...
	2025/10/25 09:35:59 Ready to write response ...
	2025/10/25 09:36:14 Ready to marshal response ...
	2025/10/25 09:36:14 Ready to write response ...
	2025/10/25 09:36:16 Ready to marshal response ...
	2025/10/25 09:36:16 Ready to write response ...
	2025/10/25 09:36:50 Ready to marshal response ...
	2025/10/25 09:36:50 Ready to write response ...
	2025/10/25 09:38:33 Ready to marshal response ...
	2025/10/25 09:38:33 Ready to write response ...
	
	
	==> kernel <==
	 09:38:35 up  1:21,  0 user,  load average: 0.68, 2.13, 3.00
	Linux addons-523976 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa] <==
	I1025 09:36:31.991074       1 main.go:301] handling current node
	I1025 09:36:41.992781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:41.992829       1 main.go:301] handling current node
	I1025 09:36:51.991044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:36:51.991200       1 main.go:301] handling current node
	I1025 09:37:01.992041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:01.992075       1 main.go:301] handling current node
	I1025 09:37:11.993323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:11.993366       1 main.go:301] handling current node
	I1025 09:37:21.991237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:21.991272       1 main.go:301] handling current node
	I1025 09:37:31.995821       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:31.995923       1 main.go:301] handling current node
	I1025 09:37:41.997290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:41.997408       1 main.go:301] handling current node
	I1025 09:37:52.001370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:37:52.001572       1 main.go:301] handling current node
	I1025 09:38:01.991092       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:38:01.991453       1 main.go:301] handling current node
	I1025 09:38:11.995792       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:38:11.995834       1 main.go:301] handling current node
	I1025 09:38:22.005832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:38:22.005948       1 main.go:301] handling current node
	I1025 09:38:31.991901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:38:31.992018       1 main.go:301] handling current node
	
	
	==> kube-apiserver [20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234] <==
	E1025 09:34:29.737860       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:34:29.737875       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:34:29.737949       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:34:29.738025       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:34:29.739127       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:34:33.773337       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:34:33.773391       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:34:33.775060       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:34:33.823294       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:34:33.878118       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1025 09:35:37.813678       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40482: use of closed network connection
	E1025 09:35:38.055921       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40512: use of closed network connection
	E1025 09:35:38.188410       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40526: use of closed network connection
	I1025 09:36:14.236453       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1025 09:36:14.534479       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.26.39"}
	I1025 09:36:28.234884       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1025 09:36:30.485749       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1025 09:36:57.594185       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1025 09:38:33.888702       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.40.25"}
	
	
	==> kube-controller-manager [78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb] <==
	I1025 09:33:30.492759       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:33:30.493337       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:33:30.493468       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:33:30.493526       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:33:30.493767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:33:30.494339       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:33:30.494631       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:33:30.495071       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:33:30.495126       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:33:30.495195       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:33:30.495216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:33:30.495455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:33:30.510952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:30.522793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:33:36.937000       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 09:34:00.461065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:34:00.461228       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:34:00.461288       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:34:00.531559       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:34:00.537064       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:34:00.562705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:00.637270       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:15.470661       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 09:34:30.567804       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:34:30.652935       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b] <==
	I1025 09:33:31.777845       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:31.877718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:31.978318       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:31.978357       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:33:31.978419       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:32.014117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:32.014182       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:32.018779       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:32.019104       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:32.019129       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:32.020569       1 config.go:200] "Starting service config controller"
	I1025 09:33:32.020595       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:32.020615       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:32.020620       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:32.020654       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:32.020664       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:32.021301       1 config.go:309] "Starting node config controller"
	I1025 09:33:32.021320       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:32.021326       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:32.120826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:32.120826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:32.120854       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1] <==
	I1025 09:33:24.064333       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:24.068979       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:33:24.069138       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:24.069165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:24.069183       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:33:24.079348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:33:24.079706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:33:24.079759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:33:24.079805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:33:24.079850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:33:24.079894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:33:24.079933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:33:24.079975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:33:24.080013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:33:24.080057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:33:24.080098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:33:24.080147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:33:24.080193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:33:24.080236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:33:24.080277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:33:24.080328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:33:24.080457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:33:24.080503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:33:24.080565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 09:33:25.269516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:36:57 addons-523976 kubelet[1282]: I1025 09:36:57.606095    1282 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-33c69c61-51d7-47ea-acfa-e0b3f7c8d4a6" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^21553e43-b186-11f0-a241-eab552196ee9") on node "addons-523976"
	Oct 25 09:36:57 addons-523976 kubelet[1282]: I1025 09:36:57.689920    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-33c69c61-51d7-47ea-acfa-e0b3f7c8d4a6\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^21553e43-b186-11f0-a241-eab552196ee9\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:36:58 addons-523976 kubelet[1282]: I1025 09:36:58.269562    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b53b78a-678e-4ed2-ace3-8b22fbdaf21b" path="/var/lib/kubelet/pods/8b53b78a-678e-4ed2-ace3-8b22fbdaf21b/volumes"
	Oct 25 09:37:07 addons-523976 kubelet[1282]: E1025 09:37:07.251114    1282 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-1e38716319b45c61e7111b2a4620e0d04f706869df1502e2513d18dab6406d5c\": RecentStats: unable to find data in memory cache]"
	Oct 25 09:37:17 addons-523976 kubelet[1282]: E1025 09:37:17.288039    1282 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-1e38716319b45c61e7111b2a4620e0d04f706869df1502e2513d18dab6406d5c\": RecentStats: unable to find data in memory cache]"
	Oct 25 09:37:18 addons-523976 kubelet[1282]: I1025 09:37:18.259412    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2kb6l" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:37:19 addons-523976 kubelet[1282]: I1025 09:37:19.259238    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bc95g" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:37:19 addons-523976 kubelet[1282]: E1025 09:37:19.466445    1282 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/crio/crio-1e38716319b45c61e7111b2a4620e0d04f706869df1502e2513d18dab6406d5c\": RecentStats: unable to find data in memory cache]"
	Oct 25 09:37:25 addons-523976 kubelet[1282]: I1025 09:37:25.259381    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zbqtr" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:23 addons-523976 kubelet[1282]: I1025 09:38:23.459355    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8qvgv" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:23 addons-523976 kubelet[1282]: W1025 09:38:23.484509    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/crio-91e4e0a0f2c13b8f20b52429a903199e09c6083d35480733ffbcfe9ebe2b3312 WatchSource:0}: Error finding container 91e4e0a0f2c13b8f20b52429a903199e09c6083d35480733ffbcfe9ebe2b3312: Status 404 returned error can't find the container with id 91e4e0a0f2c13b8f20b52429a903199e09c6083d35480733ffbcfe9ebe2b3312
	Oct 25 09:38:24 addons-523976 kubelet[1282]: I1025 09:38:24.873995    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8qvgv" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:24 addons-523976 kubelet[1282]: I1025 09:38:24.874059    1282 scope.go:117] "RemoveContainer" containerID="9b098c9de5615df78c3283a49e5fccc3c03be3cb73fe4dee5f7e8a69da7ff589"
	Oct 25 09:38:25 addons-523976 kubelet[1282]: I1025 09:38:25.885734    1282 scope.go:117] "RemoveContainer" containerID="9b098c9de5615df78c3283a49e5fccc3c03be3cb73fe4dee5f7e8a69da7ff589"
	Oct 25 09:38:25 addons-523976 kubelet[1282]: I1025 09:38:25.886201    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8qvgv" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:25 addons-523976 kubelet[1282]: I1025 09:38:25.887296    1282 scope.go:117] "RemoveContainer" containerID="828a7a20fec8b822e9a59af7c91adcd150a5bd41f92f6c93d0ad5c044af87693"
	Oct 25 09:38:25 addons-523976 kubelet[1282]: E1025 09:38:25.887894    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8qvgv_kube-system(b074d8cf-486c-474b-868b-534d304e5e83)\"" pod="kube-system/registry-creds-764b6fb674-8qvgv" podUID="b074d8cf-486c-474b-868b-534d304e5e83"
	Oct 25 09:38:26 addons-523976 kubelet[1282]: I1025 09:38:26.890353    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-8qvgv" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:26 addons-523976 kubelet[1282]: I1025 09:38:26.890411    1282 scope.go:117] "RemoveContainer" containerID="828a7a20fec8b822e9a59af7c91adcd150a5bd41f92f6c93d0ad5c044af87693"
	Oct 25 09:38:26 addons-523976 kubelet[1282]: E1025 09:38:26.890567    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-8qvgv_kube-system(b074d8cf-486c-474b-868b-534d304e5e83)\"" pod="kube-system/registry-creds-764b6fb674-8qvgv" podUID="b074d8cf-486c-474b-868b-534d304e5e83"
	Oct 25 09:38:27 addons-523976 kubelet[1282]: I1025 09:38:27.259582    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-zbqtr" secret="" err="secret \"gcp-auth\" not found"
	Oct 25 09:38:33 addons-523976 kubelet[1282]: I1025 09:38:33.952616    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/51085969-6431-4063-8ea5-6abb01a4d61c-gcp-creds\") pod \"hello-world-app-5d498dc89-5fwgq\" (UID: \"51085969-6431-4063-8ea5-6abb01a4d61c\") " pod="default/hello-world-app-5d498dc89-5fwgq"
	Oct 25 09:38:33 addons-523976 kubelet[1282]: I1025 09:38:33.953213    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcmz4\" (UniqueName: \"kubernetes.io/projected/51085969-6431-4063-8ea5-6abb01a4d61c-kube-api-access-fcmz4\") pod \"hello-world-app-5d498dc89-5fwgq\" (UID: \"51085969-6431-4063-8ea5-6abb01a4d61c\") " pod="default/hello-world-app-5d498dc89-5fwgq"
	Oct 25 09:38:34 addons-523976 kubelet[1282]: W1025 09:38:34.411635    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/crio-5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754 WatchSource:0}: Error finding container 5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754: Status 404 returned error can't find the container with id 5607f3d5eecb153b3139577fb3567946ffceec816abfe02c91a4c154abb57754
	Oct 25 09:38:35 addons-523976 kubelet[1282]: I1025 09:38:35.944930    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-5fwgq" podStartSLOduration=2.286448327 podStartE2EDuration="2.944910046s" podCreationTimestamp="2025-10-25 09:38:33 +0000 UTC" firstStartedPulling="2025-10-25 09:38:34.413693743 +0000 UTC m=+308.276542396" lastFinishedPulling="2025-10-25 09:38:35.072155462 +0000 UTC m=+308.935004115" observedRunningTime="2025-10-25 09:38:35.943952553 +0000 UTC m=+309.806801271" watchObservedRunningTime="2025-10-25 09:38:35.944910046 +0000 UTC m=+309.807758707"
	
	
	==> storage-provisioner [baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b] <==
	W1025 09:38:10.615103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.618243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:12.625251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:14.628956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:14.634192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:16.637263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:16.644019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:18.647761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:18.653031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:20.656229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:20.660659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:22.664387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:22.670999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:24.675397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:24.681377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:26.687495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:26.691842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:28.694769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:28.699339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:30.702357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:30.706647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:32.710031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:32.717055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:34.720338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:38:34.725119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-523976 -n addons-523976
helpers_test.go:269: (dbg) Run:  kubectl --context addons-523976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq: exit status 1 (86.746898ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z2xx7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gd8wq" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq: exit status 1
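The two admission pods listed as non-running above are one-shot ingress-nginx jobs; they appear to have been garbage-collected between the field-selector query and the describe call, hence the NotFound errors. For reference, a minimal client-go sketch of the same non-running filter the helper applies (the kubeconfig path and error handling are illustrative, not the harness's actual code):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative: load ~/.kube/config; the harness instead passes --context addons-523976.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as helpers_test.go: every pod, any namespace, not in phase Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}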
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (323.321952ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:38:37.064821  304460 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:38:37.065634  304460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:37.065676  304460 out.go:374] Setting ErrFile to fd 2...
	I1025 09:38:37.065697  304460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:37.065997  304460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:38:37.066361  304460 mustload.go:65] Loading cluster: addons-523976
	I1025 09:38:37.066868  304460 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:38:37.066908  304460 addons.go:606] checking whether the cluster is paused
	I1025 09:38:37.067057  304460 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:38:37.067087  304460 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:38:37.067618  304460 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:38:37.093177  304460 ssh_runner.go:195] Run: systemctl --version
	I1025 09:38:37.093300  304460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:38:37.120072  304460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:38:37.242827  304460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:38:37.242913  304460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:38:37.290642  304460 cri.go:89] found id: "828a7a20fec8b822e9a59af7c91adcd150a5bd41f92f6c93d0ad5c044af87693"
	I1025 09:38:37.290667  304460 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:38:37.290672  304460 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:38:37.290676  304460 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:38:37.290679  304460 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:38:37.290683  304460 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:38:37.290686  304460 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:38:37.290689  304460 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:38:37.290692  304460 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:38:37.290699  304460 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:38:37.290702  304460 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:38:37.290704  304460 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:38:37.290707  304460 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:38:37.290711  304460 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:38:37.290714  304460 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:38:37.290719  304460 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:38:37.290722  304460 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:38:37.290725  304460 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:38:37.290728  304460 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:38:37.290731  304460 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:38:37.290735  304460 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:38:37.290738  304460 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:38:37.290741  304460 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:38:37.290744  304460 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:38:37.290747  304460 cri.go:89] found id: ""
	I1025 09:38:37.290797  304460 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:37.308196  304460 out.go:203] 
	W1025 09:38:37.311283  304460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:38:37.311313  304460 out.go:285] * 
	* 
	W1025 09:38:37.317833  304460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:38:37.320876  304460 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable ingress --alsologtostderr -v=1: exit status 11 (283.613309ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:38:37.380279  304581 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:38:37.381116  304581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:37.381149  304581 out.go:374] Setting ErrFile to fd 2...
	I1025 09:38:37.381182  304581 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:38:37.381482  304581 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:38:37.381811  304581 mustload.go:65] Loading cluster: addons-523976
	I1025 09:38:37.382228  304581 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:38:37.382274  304581 addons.go:606] checking whether the cluster is paused
	I1025 09:38:37.382401  304581 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:38:37.382435  304581 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:38:37.382916  304581 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:38:37.402093  304581 ssh_runner.go:195] Run: systemctl --version
	I1025 09:38:37.402168  304581 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:38:37.420470  304581 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:38:37.526558  304581 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:38:37.526647  304581 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:38:37.572164  304581 cri.go:89] found id: "828a7a20fec8b822e9a59af7c91adcd150a5bd41f92f6c93d0ad5c044af87693"
	I1025 09:38:37.572189  304581 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:38:37.572194  304581 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:38:37.572198  304581 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:38:37.572201  304581 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:38:37.572205  304581 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:38:37.572208  304581 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:38:37.572212  304581 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:38:37.572215  304581 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:38:37.572222  304581 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:38:37.572225  304581 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:38:37.572228  304581 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:38:37.572231  304581 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:38:37.572235  304581 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:38:37.572238  304581 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:38:37.572243  304581 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:38:37.572246  304581 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:38:37.572250  304581 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:38:37.572253  304581 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:38:37.572259  304581 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:38:37.572264  304581 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:38:37.572267  304581 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:38:37.572283  304581 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:38:37.572293  304581 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:38:37.572296  304581 cri.go:89] found id: ""
	I1025 09:38:37.572363  304581 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:38:37.592052  304581 out.go:203] 
	W1025 09:38:37.594919  304581 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:38:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:38:37.594975  304581 out.go:285] * 
	* 
	W1025 09:38:37.601423  304581 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:38:37.604370  304581 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (143.69s)
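Every addons-disable failure in this report bottoms out in the same probe: before changing anything, minikube checks whether the cluster is paused by listing runc containers, and `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node. A minimal repro sketch from the host, reusing only the binary, profile, and commands shown above (assumes the addons-523976 profile is still running):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the probe that `addons disable` performs before any change:
		// list runc containers as JSON inside the node. On this crio node the
		// runc state directory /run/runc is missing, so the probe exits 1 and
		// minikube aborts with MK_ADDON_DISABLE_PAUSED.
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "addons-523976",
			"ssh", "--", "sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}

The crictl listing that precedes the runc call succeeds (the container IDs in the stderr dumps come from it), so the missing runc state directory is the only failing step in the paused check.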

TestAddons/parallel/InspektorGadget (5.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-47j62" [27d5d811-45e6-44cd-9de5-e2c666671079] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003584348s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (279.242696ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:36:13.689392  302306 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:13.690268  302306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:13.690284  302306 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:13.690291  302306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:13.690589  302306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:13.690924  302306 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:13.691388  302306 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:13.691410  302306 addons.go:606] checking whether the cluster is paused
	I1025 09:36:13.691554  302306 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:13.691573  302306 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:13.692106  302306 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:13.710652  302306 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:13.710711  302306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:13.728033  302306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:13.832237  302306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:13.832360  302306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:13.883510  302306 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:13.883534  302306 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:13.883539  302306 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:13.883543  302306 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:13.883546  302306 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:13.883549  302306 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:13.883552  302306 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:13.883555  302306 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:13.883558  302306 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:13.883564  302306 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:13.883567  302306 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:13.883570  302306 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:13.883573  302306 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:13.883577  302306 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:13.883580  302306 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:13.883588  302306 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:13.883591  302306 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:13.883595  302306 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:13.883599  302306 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:13.883602  302306 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:13.883605  302306 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:13.883609  302306 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:13.883612  302306 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:13.883615  302306 cri.go:89] found id: ""
	I1025 09:36:13.883667  302306 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:13.903339  302306 out.go:203] 
	W1025 09:36:13.906102  302306 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:13.906128  302306 out.go:285] * 
	* 
	W1025 09:36:13.912501  302306 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:13.915361  302306 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.29s)
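The health wait itself passed (gadget-47j62 was Running within ~5s); only the disable step failed, for the same runc probe reason as above. A simplified sketch of the label-selector wait from addons_test.go:823 (the poll interval and plain Running-phase check are illustrative; the real helper waits up to 8m0s and also checks readiness):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the gadget namespace for a Running pod matching k8s-app=gadget.
		deadline := time.Now().Add(8 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("gadget").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=gadget"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("healthy:", p.Name)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for k8s-app=gadget")
	}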

TestAddons/parallel/MetricsServer (5.36s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.002847ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003824328s
addons_test.go:463: (dbg) Run:  kubectl --context addons-523976 top pods -n kube-system
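`kubectl top pods` succeeding here indicates the metrics.k8s.io aggregation had recovered from the 503s seen in the apiserver log above by the time this test ran. A hedged sketch of the same query through the standard k8s.io/metrics clientset (kubeconfig handling is illustrative, as in the earlier sketches):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsv "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		mc, err := metricsv.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same data `kubectl top pods -n kube-system` renders.
		list, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(
			context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, m := range list.Items {
			for _, c := range m.Containers {
				fmt.Println(m.Name, c.Name, "cpu:", c.Usage.Cpu(), "mem:", c.Usage.Memory())
			}
		}
	}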
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (255.757335ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:36:08.428399  302225 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:08.429144  302225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:08.429158  302225 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:08.429164  302225 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:08.429431  302225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:08.429732  302225 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:08.430094  302225 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:08.430112  302225 addons.go:606] checking whether the cluster is paused
	I1025 09:36:08.430211  302225 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:08.430225  302225 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:08.430651  302225 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:08.448406  302225 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:08.448471  302225 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:08.466719  302225 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:08.569700  302225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:08.569784  302225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:08.600066  302225 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:08.600093  302225 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:08.600099  302225 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:08.600103  302225 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:08.600107  302225 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:08.600110  302225 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:08.600113  302225 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:08.600116  302225 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:08.600119  302225 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:08.600129  302225 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:08.600133  302225 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:08.600137  302225 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:08.600140  302225 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:08.600143  302225 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:08.600147  302225 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:08.600154  302225 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:08.600158  302225 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:08.600163  302225 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:08.600166  302225 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:08.600169  302225 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:08.600173  302225 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:08.600176  302225 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:08.600179  302225 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:08.600182  302225 cri.go:89] found id: ""
	I1025 09:36:08.600257  302225 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:08.615124  302225 out.go:203] 
	W1025 09:36:08.617921  302225 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:08.617949  302225 out.go:285] * 
	W1025 09:36:08.624357  302225 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:08.627431  302225 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)
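Every addons enable/disable failure in this run shares the root cause visible in the stderr above: before changing an addon, minikube first checks whether the cluster is paused, and on the crio runtime that check ends by shelling out to `sudo runc list -f json`. On this node /run/runc does not exist, so the probe itself exits with status 1 and the whole command aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. The sketch below shows a probe that tolerates the missing state directory; the file and helper names are assumptions for illustration, not minikube's actual cri.go code.

	// pausedprobe.go: a sketch of a paused-state probe that degrades
	// gracefully on crio nodes. Names are illustrative assumptions.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"os/exec"
	)

	// runcList returns the raw `runc list` output, or nil when the runc
	// state directory is absent (the exact condition failing above).
	func runcList() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); errors.Is(err, os.ErrNotExist) {
			// runc has no state to report on this node, so treat it as
			// "no paused containers" instead of exiting non-zero.
			return nil, nil
		}
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		out, err := runcList()
		if err != nil {
			fmt.Fprintln(os.Stderr, "paused check failed:", err)
			os.Exit(1)
		}
		if len(out) == 0 {
			fmt.Println("no paused containers")
			return
		}
		fmt.Printf("%s\n", out)
	}

Under a probe like this, the disable path would proceed on crio nodes like this one instead of failing every parallel addon test with exit status 11.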

                                                
                                    
TestAddons/parallel/CSI (58.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1025 09:35:59.570247  294017 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1025 09:35:59.574722  294017 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1025 09:35:59.574749  294017 kapi.go:107] duration metric: took 4.517599ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.529136ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-523976 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-523976 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8cb79d18-f971-461b-b847-22853b6e0361] Pending
helpers_test.go:352: "task-pv-pod" [8cb79d18-f971-461b-b847-22853b6e0361] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8cb79d18-f971-461b-b847-22853b6e0361] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004115458s
addons_test.go:572: (dbg) Run:  kubectl --context addons-523976 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-523976 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-523976 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-523976 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-523976 delete pod task-pv-pod: (1.167668095s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-523976 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-523976 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-523976 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8b53b78a-678e-4ed2-ace3-8b22fbdaf21b] Pending
helpers_test.go:352: "task-pv-pod-restore" [8b53b78a-678e-4ed2-ace3-8b22fbdaf21b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8b53b78a-678e-4ed2-ace3-8b22fbdaf21b] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00407529s
addons_test.go:614: (dbg) Run:  kubectl --context addons-523976 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-523976 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-523976 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (273.5382ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:36:58.067464  303331 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:58.068244  303331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.068285  303331 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:58.068311  303331 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.068643  303331 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:58.068964  303331 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:58.069551  303331 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.069613  303331 addons.go:606] checking whether the cluster is paused
	I1025 09:36:58.069773  303331 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.069810  303331 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:58.070299  303331 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:58.088552  303331 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:58.088626  303331 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:58.105817  303331 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:58.209650  303331 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:58.209749  303331 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:58.240046  303331 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:58.240071  303331 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:58.240078  303331 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:58.240082  303331 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:58.240086  303331 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:58.240090  303331 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:58.240093  303331 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:58.240096  303331 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:58.240100  303331 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:58.240107  303331 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:58.240112  303331 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:58.240120  303331 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:58.240124  303331 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:58.240127  303331 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:58.240131  303331 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:58.240136  303331 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:58.240145  303331 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:58.240149  303331 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:58.240153  303331 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:58.240156  303331 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:58.240161  303331 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:58.240164  303331 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:58.240169  303331 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:58.240174  303331 cri.go:89] found id: ""
	I1025 09:36:58.240230  303331 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:58.256062  303331 out.go:203] 
	W1025 09:36:58.258996  303331 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:58.259076  303331 out.go:285] * 
	W1025 09:36:58.266096  303331 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:58.269269  303331 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (270.343476ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:36:58.329343  303375 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:58.330246  303375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.330262  303375 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:58.330267  303375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:58.330566  303375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:58.330898  303375 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:58.331346  303375 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.331367  303375 addons.go:606] checking whether the cluster is paused
	I1025 09:36:58.331507  303375 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:58.331525  303375 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:58.332020  303375 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:58.350369  303375 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:58.350432  303375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:58.375731  303375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:58.481411  303375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:58.481541  303375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:58.513186  303375 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:58.513208  303375 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:58.513219  303375 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:58.513224  303375 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:58.513227  303375 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:58.513230  303375 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:58.513233  303375 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:58.513259  303375 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:58.513263  303375 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:58.513269  303375 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:58.513272  303375 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:58.513275  303375 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:58.513279  303375 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:58.513282  303375 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:58.513285  303375 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:58.513290  303375 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:58.513297  303375 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:58.513301  303375 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:58.513304  303375 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:58.513307  303375 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:58.513312  303375 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:58.513330  303375 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:58.513341  303375 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:58.513345  303375 cri.go:89] found id: ""
	I1025 09:36:58.513413  303375 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:58.527913  303375 out.go:203] 
	W1025 09:36:58.530905  303375 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:58.530930  303375 out.go:285] * 
	W1025 09:36:58.537240  303375 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:58.540358  303375 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (58.98s)
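The CSI data path itself passed: the claim bound, task-pv-pod ran, the snapshot became ready, and the restore pod came up. Only the trailing addon-disable calls tripped the same runc probe noted after the MetricsServer failure. The repeated helpers_test.go:402 lines are a shell-level poll of the claim's .status.phase through kubectl and JSONPath; the same wait expressed against client-go might look like the sketch below, where the function name and the two-second interval are assumptions for illustration.

	// pvcwait.go: a client-go rendering of the kubectl polling loop above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPVCBound polls the claim until it is Bound or the
	// six-minute budget (the test's own timeout) is spent.
	func waitForPVCBound(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pvc, err := cs.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				return pvc.Status.Phase == corev1.ClaimBound, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPVCBound(context.Background(), cs, "default", "hpvc"); err != nil {
			panic(err)
		}
		fmt.Println(`pvc "hpvc" is Bound`)
	}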

                                                
                                    
TestAddons/parallel/Headlamp (3.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-523976 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-523976 --alsologtostderr -v=1: exit status 11 (387.248443ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:35:59.491429  301571 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:59.495760  301571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.495822  301571 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:59.495843  301571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.496162  301571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:59.496509  301571 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:59.496935  301571 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.496981  301571 addons.go:606] checking whether the cluster is paused
	I1025 09:35:59.497112  301571 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.497146  301571 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:59.497648  301571 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:59.528926  301571 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:59.528981  301571 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:59.562584  301571 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:59.698932  301571 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:59.699010  301571 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:59.751447  301571 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:59.751466  301571 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:59.751471  301571 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:59.751475  301571 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:59.751478  301571 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:59.751483  301571 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:59.751486  301571 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:59.751489  301571 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:59.751492  301571 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:59.751498  301571 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:59.751501  301571 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:59.751505  301571 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:59.751508  301571 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:59.751511  301571 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:59.751514  301571 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:59.751519  301571 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:59.751522  301571 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:59.751527  301571 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:59.751530  301571 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:59.751533  301571 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:59.751539  301571 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:59.751542  301571 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:59.751545  301571 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:59.751548  301571 cri.go:89] found id: ""
	I1025 09:35:59.751597  301571 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:59.773881  301571 out.go:203] 
	W1025 09:35:59.776894  301571 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:59.776927  301571 out.go:285] * 
	W1025 09:35:59.787960  301571 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:59.792250  301571 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-523976 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-523976
helpers_test.go:243: (dbg) docker inspect addons-523976:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1",
	        "Created": "2025-10-25T09:32:59.9140353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:32:59.987197113Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/hosts",
	        "LogPath": "/var/lib/docker/containers/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1-json.log",
	        "Name": "/addons-523976",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-523976:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-523976",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1",
	                "LowerDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1266035770b017aa847ebb80f1c0de1e645922080c1edf8222d76dc66700b3a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-523976",
	                "Source": "/var/lib/docker/volumes/addons-523976/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-523976",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-523976",
	                "name.minikube.sigs.k8s.io": "addons-523976",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f27cd7116b2f7b226bc58fd2974beb86d7d23d60a1c9828b992a93e933600536",
	            "SandboxKey": "/var/run/docker/netns/f27cd7116b2f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-523976": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:d6:a5:d7:54:0b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1bc5fb69b22cf2a4fbdd9de449d489f38af903ee1ee0d6eb29d9ffd0fa06e1ba",
	                    "EndpointID": "6f07d95b66d1649dba8c31fa7de1fb050d190fbd03a382942539f4b12c117ce5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-523976",
	                        "9fc15dbb1b0a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
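The inspect document above is what the earlier cli_runner lines query with a Go template ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}) to discover the node's SSH port, 33142 in this run. Below is a sketch of the same lookup done by decoding the JSON directly; the struct models only the fields used here and is an illustrative assumption, not minikube's own type.

	// sshport.go: recover the 22/tcp host port from `docker inspect` output.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspect models just the port-binding slice of the inspect document.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "inspect", container).Output()
		if err != nil {
			return "", err
		}
		var results []inspect
		if err := json.Unmarshal(out, &results); err != nil {
			return "", err
		}
		if len(results) == 0 {
			return "", fmt.Errorf("no such container: %s", container)
		}
		bindings := results[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("no 22/tcp binding on %s", container)
		}
		return bindings[0].HostPort, nil
	}

	func main() {
		port, err := sshHostPort("addons-523976")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		fmt.Println("ssh host port:", port) // "33142" in this run
	}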
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-523976 -n addons-523976
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-523976 logs -n 25: (1.569871998s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147571 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-147571   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-147571                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-147571   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-828998 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-828998   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-828998                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-828998   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-147571                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-147571   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-828998                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-828998   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ --download-only -p download-docker-545529 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-545529 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p download-docker-545529                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-545529 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ --download-only -p binary-mirror-490963 --alsologtostderr --binary-mirror http://127.0.0.1:46207 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-490963   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ -p binary-mirror-490963                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-490963   │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ addons  │ enable dashboard -p addons-523976                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ addons  │ disable dashboard -p addons-523976                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ start   │ -p addons-523976 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ip      │ addons-523976 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ ssh     │ addons-523976 ssh cat /opt/local-path-provisioner/pvc-e0338399-28dc-478f-89a3-735d9bdcfa58_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │ 25 Oct 25 09:35 UTC │
	│ addons  │ addons-523976 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ addons-523976 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	│ addons  │ enable headlamp -p addons-523976 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-523976          │ jenkins │ v1.37.0 │ 25 Oct 25 09:35 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:33.407760  294773 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:33.407878  294773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:33.407914  294773 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:33.407925  294773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:33.408191  294773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:32:33.408634  294773 out.go:368] Setting JSON to false
	I1025 09:32:33.409430  294773 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4503,"bootTime":1761380250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:32:33.409500  294773 start.go:141] virtualization:  
	I1025 09:32:33.412886  294773 out.go:179] * [addons-523976] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:33.416589  294773 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:32:33.416635  294773 notify.go:220] Checking for updates...
	I1025 09:32:33.419618  294773 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:33.422520  294773 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:32:33.425393  294773 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:32:33.428609  294773 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:32:33.431446  294773 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:32:33.434475  294773 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:33.468035  294773 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:33.468164  294773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:33.534040  294773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:32:33.5249696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:33.534143  294773 docker.go:318] overlay module found
	I1025 09:32:33.537170  294773 out.go:179] * Using the docker driver based on user configuration
	I1025 09:32:33.540059  294773 start.go:305] selected driver: docker
	I1025 09:32:33.540094  294773 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:33.540108  294773 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:32:33.540852  294773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:33.598380  294773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-25 09:32:33.588800484 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:33.598590  294773 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:33.598953  294773 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:32:33.602022  294773 out.go:179] * Using Docker driver with root privileges
	I1025 09:32:33.604991  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:32:33.605079  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:33.605095  294773 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:33.605179  294773 start.go:349] cluster config:
	{Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:33.610210  294773 out.go:179] * Starting "addons-523976" primary control-plane node in "addons-523976" cluster
	I1025 09:32:33.613075  294773 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:33.616147  294773 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:33.618954  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:33.619042  294773 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:33.619278  294773 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:33.619294  294773 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:33.619384  294773 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:32:33.619394  294773 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:32:33.619732  294773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json ...
	I1025 09:32:33.619752  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json: {Name:mkec784ce2da4db8900e08806a3e0bbaa1dadf28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:33.635948  294773 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:33.636108  294773 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:33.636133  294773 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1025 09:32:33.636142  294773 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1025 09:32:33.636151  294773 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1025 09:32:33.636156  294773 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1025 09:32:51.684840  294773 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1025 09:32:51.684878  294773 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:32:51.684908  294773 start.go:360] acquireMachinesLock for addons-523976: {Name:mk120d50a90dba65a5a199c912429594e3c4a035 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:32:51.685044  294773 start.go:364] duration metric: took 117.918µs to acquireMachinesLock for "addons-523976"
	I1025 09:32:51.685070  294773 start.go:93] Provisioning new machine with config: &{Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:32:51.685141  294773 start.go:125] createHost starting for "" (driver="docker")
	I1025 09:32:51.688519  294773 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1025 09:32:51.688762  294773 start.go:159] libmachine.API.Create for "addons-523976" (driver="docker")
	I1025 09:32:51.688813  294773 client.go:168] LocalClient.Create starting
	I1025 09:32:51.688951  294773 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 09:32:52.573194  294773 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 09:32:53.171019  294773 cli_runner.go:164] Run: docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 09:32:53.186438  294773 cli_runner.go:211] docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 09:32:53.186539  294773 network_create.go:284] running [docker network inspect addons-523976] to gather additional debugging logs...
	I1025 09:32:53.186560  294773 cli_runner.go:164] Run: docker network inspect addons-523976
	W1025 09:32:53.202409  294773 cli_runner.go:211] docker network inspect addons-523976 returned with exit code 1
	I1025 09:32:53.202438  294773 network_create.go:287] error running [docker network inspect addons-523976]: docker network inspect addons-523976: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-523976 not found
	I1025 09:32:53.202450  294773 network_create.go:289] output of [docker network inspect addons-523976]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-523976 not found
	
	** /stderr **
	I1025 09:32:53.202540  294773 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:32:53.218441  294773 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d12460}
	I1025 09:32:53.218494  294773 network_create.go:124] attempt to create docker network addons-523976 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1025 09:32:53.218548  294773 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-523976 addons-523976
	I1025 09:32:53.276339  294773 network_create.go:108] docker network addons-523976 192.168.49.0/24 created
	I1025 09:32:53.276375  294773 kic.go:121] calculated static IP "192.168.49.2" for the "addons-523976" container
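
The sequence above is how the profile network comes into being: the initial docker network inspect is expected to fail for a fresh profile, a free private subnet is selected, and a labeled bridge network is created before the node's static IP is derived from it. A minimal Go sketch of the same check-then-create flow via os/exec (subnet, gateway, flags, and labels are copied from this run's command; ensureNetwork itself is an illustrative helper, not minikube's actual network_create code):

package main

import (
	"fmt"
	"os/exec"
)

// ensureNetwork mirrors the log's sequence: inspect first, and create the
// labeled bridge network only when inspect fails.
func ensureNetwork(name, subnet, gateway string) error {
	if exec.Command("docker", "network", "inspect", name).Run() == nil {
		return nil // network already exists
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet="+subnet,
		"--gateway="+gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io="+name,
		name,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Subnet and gateway as chosen in this run.
	if err := ensureNetwork("addons-523976", "192.168.49.0/24", "192.168.49.1"); err != nil {
		fmt.Println(err)
	}
}
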
	I1025 09:32:53.276448  294773 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 09:32:53.291670  294773 cli_runner.go:164] Run: docker volume create addons-523976 --label name.minikube.sigs.k8s.io=addons-523976 --label created_by.minikube.sigs.k8s.io=true
	I1025 09:32:53.313797  294773 oci.go:103] Successfully created a docker volume addons-523976
	I1025 09:32:53.313894  294773 cli_runner.go:164] Run: docker run --rm --name addons-523976-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --entrypoint /usr/bin/test -v addons-523976:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 09:32:55.423346  294773 cli_runner.go:217] Completed: docker run --rm --name addons-523976-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --entrypoint /usr/bin/test -v addons-523976:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.109397199s)
	I1025 09:32:55.423382  294773 oci.go:107] Successfully prepared a docker volume addons-523976
	I1025 09:32:55.423409  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:32:55.423428  294773 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 09:32:55.423493  294773 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-523976:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 09:32:59.844161  294773 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-523976:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.420614087s)
	I1025 09:32:59.844198  294773 kic.go:203] duration metric: took 4.420764111s to extract preloaded images to volume ...
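
Preload extraction works by treating the kicbase image as a disposable tar runner: the lz4 tarball is bind-mounted read-only and unpacked straight into the named volume that later backs /var in the node container. A hedged sketch of that one-shot container (mount paths and tar flags are copied from the command above; extractPreload is a made-up helper name, and the image reference in main drops the digest for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded-images tarball into a docker volume by
// running tar inside a throwaway container, as the command above does.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4",
		"addons-523976",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
	)
	if err != nil {
		fmt.Println(err)
	}
}
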
	W1025 09:32:59.844327  294773 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 09:32:59.844444  294773 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 09:32:59.896655  294773 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-523976 --name addons-523976 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-523976 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-523976 --network addons-523976 --ip 192.168.49.2 --volume addons-523976:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 09:33:00.427099  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Running}}
	I1025 09:33:00.455362  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.483255  294773 cli_runner.go:164] Run: docker exec addons-523976 stat /var/lib/dpkg/alternatives/iptables
	I1025 09:33:00.534519  294773 oci.go:144] the created container "addons-523976" has a running status.
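
Once docker run returns, the provisioner does not trust the exit code alone; it inspects the container state before declaring the node up, which is what the back-to-back inspect calls above are doing. A small polling sketch of that readiness check (waitRunning is illustrative; the inspect format string is the one used above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls `docker container inspect --format={{.State.Running}}`
// until the container reports true or the deadline passes.
func waitRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Running}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	if err := waitRunning("addons-523976", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
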
	I1025 09:33:00.534550  294773 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa...
	I1025 09:33:00.778871  294773 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 09:33:00.804947  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.831583  294773 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 09:33:00.831660  294773 kic_runner.go:114] Args: [docker exec --privileged addons-523976 chown docker:docker /home/docker/.ssh/authorized_keys]
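
Key provisioning has three observable steps in the log: generate a key pair on the host, install the public key as authorized_keys inside the container, and chown it to the docker user. minikube generates the key in-process; the sketch below substitutes ssh-keygen for that step and is otherwise a replay of what the log shows (provisionSSHKey is a made-up name, and it assumes /home/docker/.ssh already exists in the image, as it does here):

package main

import (
	"fmt"
	"os/exec"
)

// provisionSSHKey replays the log's steps: create a key pair, copy the public
// key into the container as authorized_keys, and fix its ownership.
func provisionSSHKey(container, keyPath string) error {
	steps := [][]string{
		// minikube generates this key in Go; ssh-keygen is a stand-in here.
		{"ssh-keygen", "-t", "rsa", "-N", "", "-f", keyPath},
		{"docker", "cp", keyPath + ".pub", container + ":/home/docker/.ssh/authorized_keys"},
		{"docker", "exec", "--privileged", container,
			"chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", s, err, out)
		}
	}
	return nil
}

func main() {
	if err := provisionSSHKey("addons-523976", "/tmp/id_rsa"); err != nil {
		fmt.Println(err)
	}
}
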
	I1025 09:33:00.900382  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:00.932799  294773 machine.go:93] provisionDockerMachine start ...
	I1025 09:33:00.932893  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:00.954722  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:00.955056  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:00.955067  294773 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:33:00.955664  294773 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50230->127.0.0.1:33142: read: connection reset by peer
	I1025 09:33:04.107581  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523976
	
	I1025 09:33:04.107669  294773 ubuntu.go:182] provisioning hostname "addons-523976"
	I1025 09:33:04.107762  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:04.125551  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:04.125853  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:04.125867  294773 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-523976 && echo "addons-523976" | sudo tee /etc/hostname
	I1025 09:33:04.284352  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523976
	
	I1025 09:33:04.284478  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:04.301352  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:04.301678  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:04.301696  294773 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-523976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-523976/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-523976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:33:04.451323  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 09:33:04.451351  294773 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 09:33:04.451380  294773 ubuntu.go:190] setting up certificates
	I1025 09:33:04.451391  294773 provision.go:84] configureAuth start
	I1025 09:33:04.451452  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:04.467962  294773 provision.go:143] copyHostCerts
	I1025 09:33:04.468043  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 09:33:04.468166  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 09:33:04.468269  294773 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 09:33:04.468322  294773 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.addons-523976 san=[127.0.0.1 192.168.49.2 addons-523976 localhost minikube]
	I1025 09:33:05.341230  294773 provision.go:177] copyRemoteCerts
	I1025 09:33:05.341302  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:33:05.341344  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.358121  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:05.462894  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1025 09:33:05.480320  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 09:33:05.498197  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:33:05.515858  294773 provision.go:87] duration metric: took 1.0644412s to configureAuth
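
configureAuth generates a server certificate with the node's SANs and then pushes the PEM files to /etc/docker over the SSH port forwarded to 127.0.0.1. A plain-scp approximation of that copy step (minikube actually streams the files through its own ssh_runner with sudo on the remote side; pushCert, the host-key option, and the assumption that the remote path is writable are all illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// pushCert copies one PEM file into the node over the forwarded SSH port,
// targeting the /etc/docker destinations seen in the log. Plain scp needs
// the remote path to be writable by the docker user; minikube avoids this
// by copying through its ssh_runner with sudo.
func pushCert(local, remote, key string, port int) error {
	out, err := exec.Command("scp",
		"-i", key,
		"-P", strconv.Itoa(port),
		"-o", "StrictHostKeyChecking=no",
		local, "docker@127.0.0.1:"+remote,
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp %s: %v: %s", local, err, out)
	}
	return nil
}

func main() {
	err := pushCert(
		"/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem",
		"/etc/docker/server.pem",
		"/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa",
		33142, // the forwarded port for this run
	)
	if err != nil {
		fmt.Println(err)
	}
}
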
	I1025 09:33:05.515887  294773 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:33:05.516078  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:05.516191  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.532954  294773 main.go:141] libmachine: Using SSH client type: native
	I1025 09:33:05.533258  294773 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33142 <nil> <nil>}
	I1025 09:33:05.533279  294773 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:33:05.785106  294773 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:33:05.785131  294773 machine.go:96] duration metric: took 4.852312132s to provisionDockerMachine
	I1025 09:33:05.785143  294773 client.go:171] duration metric: took 14.096316457s to LocalClient.Create
	I1025 09:33:05.785156  294773 start.go:167] duration metric: took 14.096396015s to libmachine.API.Create "addons-523976"
	I1025 09:33:05.785163  294773 start.go:293] postStartSetup for "addons-523976" (driver="docker")
	I1025 09:33:05.785174  294773 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:33:05.785237  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:33:05.785279  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.802590  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:05.907136  294773 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:33:05.910420  294773 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:33:05.910447  294773 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:33:05.910459  294773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 09:33:05.910527  294773 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 09:33:05.910553  294773 start.go:296] duration metric: took 125.384621ms for postStartSetup
	I1025 09:33:05.910865  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:05.927404  294773 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/config.json ...
	I1025 09:33:05.927698  294773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:33:05.927748  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:05.944087  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.044161  294773 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:33:06.048779  294773 start.go:128] duration metric: took 14.363623028s to createHost
	I1025 09:33:06.048802  294773 start.go:83] releasing machines lock for "addons-523976", held for 14.363749085s
	I1025 09:33:06.048876  294773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-523976
	I1025 09:33:06.065552  294773 ssh_runner.go:195] Run: cat /version.json
	I1025 09:33:06.065607  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:06.065864  294773 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:33:06.065928  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:06.083908  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.085086  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:06.186790  294773 ssh_runner.go:195] Run: systemctl --version
	I1025 09:33:06.278512  294773 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:33:06.313895  294773 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:33:06.318084  294773 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:33:06.318158  294773 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:33:06.348132  294773 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 09:33:06.348210  294773 start.go:495] detecting cgroup driver to use...
	I1025 09:33:06.348262  294773 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:33:06.348379  294773 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:33:06.367411  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:33:06.380270  294773 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:33:06.380337  294773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:33:06.400378  294773 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:33:06.419143  294773 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:33:06.563069  294773 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:33:06.689621  294773 docker.go:234] disabling docker service ...
	I1025 09:33:06.689689  294773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:33:06.710782  294773 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:33:06.723655  294773 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:33:06.842525  294773 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:33:06.963220  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
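
The runtime switchover above is a fixed systemctl sequence run inside the node: stop, disable, and mask cri-docker and docker so CRI-O is the only runtime left when kubeadm probes the socket. A condensed replay of that sequence through docker exec (the command list is copied from the log; real minikube tolerates units that are already stopped or absent, while this sketch treats every failure as fatal):

package main

import (
	"fmt"
	"os/exec"
)

// disableCompetingRuntimes runs the systemctl sequence from the log inside
// the node container, leaving CRI-O as the only active container runtime.
func disableCompetingRuntimes(node string) error {
	cmds := []string{
		"sudo systemctl stop -f cri-docker.socket",
		"sudo systemctl stop -f cri-docker.service",
		"sudo systemctl disable cri-docker.socket",
		"sudo systemctl mask cri-docker.service",
		"sudo systemctl stop -f docker.socket",
		"sudo systemctl stop -f docker.service",
		"sudo systemctl disable docker.socket",
		"sudo systemctl mask docker.service",
	}
	for _, c := range cmds {
		if out, err := exec.Command("docker", "exec", node, "sh", "-c", c).CombinedOutput(); err != nil {
			return fmt.Errorf("%q: %v: %s", c, err, out)
		}
	}
	return nil
}

func main() {
	if err := disableCompetingRuntimes("addons-523976"); err != nil {
		fmt.Println(err)
	}
}
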
	I1025 09:33:06.977326  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:33:06.991029  294773 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:33:06.991099  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.000726  294773 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:33:07.000872  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.011231  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.022655  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.032253  294773 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:33:07.040240  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.049524  294773 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.063350  294773 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:33:07.072272  294773 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:33:07.079456  294773 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:33:07.086701  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:07.195104  294773 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:33:07.318679  294773 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:33:07.318806  294773 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:33:07.322649  294773 start.go:563] Will wait 60s for crictl version
	I1025 09:33:07.322761  294773 ssh_runner.go:195] Run: which crictl
	I1025 09:33:07.326322  294773 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:33:07.362113  294773 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:33:07.362235  294773 ssh_runner.go:195] Run: crio --version
	I1025 09:33:07.390496  294773 ssh_runner.go:195] Run: crio --version
	I1025 09:33:07.424573  294773 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:33:07.427512  294773 cli_runner.go:164] Run: docker network inspect addons-523976 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:33:07.442659  294773 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:33:07.446413  294773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 09:33:07.455846  294773 kubeadm.go:883] updating cluster {Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:33:07.455970  294773 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:33:07.456031  294773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:33:07.492027  294773 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:33:07.492050  294773 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:33:07.492106  294773 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:33:07.518291  294773 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:33:07.518316  294773 cache_images.go:85] Images are preloaded, skipping loading
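
The "all images are preloaded" decision above comes from listing the runtime's images as JSON and checking the expected tags against the result, which is why no extraction or pull happens on this pass. A sketch of that check (the struct is trimmed to the one field needed; the tag in main is just an example from this report):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList trims crictl's JSON output to the single field this check needs.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether the runtime already holds the given tag, the same
// test behind the "all images are preloaded" message above.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/pause:3.10.1")
	fmt.Println(ok, err)
}
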
	I1025 09:33:07.518325  294773 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1025 09:33:07.518430  294773 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-523976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:33:07.518538  294773 ssh_runner.go:195] Run: crio config
	I1025 09:33:07.591047  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:33:07.591072  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:07.591092  294773 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:33:07.591115  294773 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-523976 NodeName:addons-523976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:33:07.591251  294773 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-523976"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
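The four stanzas above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written as one multi-document YAML file, /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch, not minikube code, that splits such a file and sanity-checks that each document parses, assuming the gopkg.in/yaml.v3 module is available:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency; any YAML parser would do
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log above
        if err != nil {
            panic(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err) // malformed document
            }
            // Expected output for the file above: four apiVersion/kind pairs,
            // ending with kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }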
	
	I1025 09:33:07.591328  294773 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:33:07.598666  294773 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:33:07.598752  294773 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:33:07.605832  294773 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1025 09:33:07.617698  294773 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:33:07.630366  294773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1025 09:33:07.642602  294773 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:33:07.646028  294773 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
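The bash one-liner above makes the /etc/hosts update idempotent: grep -v strips any stale control-plane.minikube.internal line before the current mapping is appended. The same transform as a self-contained Go sketch (hypothetical helper, needs root to write /etc/hosts):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for name and appends "ip\tname",
    // mirroring the grep -v / echo / cp pipeline in the log above.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        kept := lines[:0]
        for _, line := range lines {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line) // keep everything except stale entries
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }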
	I1025 09:33:07.655321  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:07.775319  294773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:07.795555  294773 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976 for IP: 192.168.49.2
	I1025 09:33:07.795622  294773 certs.go:195] generating shared ca certs ...
	I1025 09:33:07.795652  294773 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:07.796480  294773 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 09:33:08.161277  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt ...
	I1025 09:33:08.161310  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt: {Name:mk790b2054fd2159ff24102bbc4a2b5c8a42b58f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.161548  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key ...
	I1025 09:33:08.161564  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key: {Name:mkff04b43f00f5d3a44d154a58f9755924430f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.161665  294773 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 09:33:08.439687  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt ...
	I1025 09:33:08.439721  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt: {Name:mk99e8d68bb4e95b72f461ca6eaf7608c70c4c83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.439950  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key ...
	I1025 09:33:08.439965  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key: {Name:mk49783463ce9c968396ea7320bf74f172bc8b6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.440642  294773 certs.go:257] generating profile certs ...
	I1025 09:33:08.440706  294773 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key
	I1025 09:33:08.440724  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt with IP's: []
	I1025 09:33:08.840504  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt ...
	I1025 09:33:08.840540  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: {Name:mka12a7197be577f0d247ec5e33034f94ec73765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.840741  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key ...
	I1025 09:33:08.840761  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.key: {Name:mka6fbf7422568b37cdf3ecd55d9d8bfbec3244b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:08.840876  294773 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55
	I1025 09:33:08.840897  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1025 09:33:09.097293  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 ...
	I1025 09:33:09.097326  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55: {Name:mk9a82f852e7d9dff3c571e77d2147925f4263e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.098114  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55 ...
	I1025 09:33:09.098132  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55: {Name:mk0015c9c86117cd916ff5bbcaf915901a07d7fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.098754  294773 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt.e0e61d55 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt
	I1025 09:33:09.098854  294773 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key.e0e61d55 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key
	I1025 09:33:09.098908  294773 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key
	I1025 09:33:09.098931  294773 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt with IP's: []
	I1025 09:33:09.378501  294773 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt ...
	I1025 09:33:09.378533  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt: {Name:mkf2a022fd313ab9805f4455106606739edd2a84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.379348  294773 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key ...
	I1025 09:33:09.379372  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key: {Name:mkda3efe9771584f87be7ec433282a34292efc86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:09.380157  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:33:09.380234  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:33:09.380278  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:33:09.380305  294773 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
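The minikubeCA and proxyClientCA pairs generated above are ordinary self-signed CAs. A compact illustration with Go's standard library, not minikube's actual implementation, of producing such a CA certificate and key (the 3-year lifetime mirrors CertExpiration:26280h0m0s from the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // 3 years, per CertExpiration above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: the template doubles as the parent certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600); err != nil {
            panic(err)
        }
    }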
	I1025 09:33:09.380900  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:33:09.398481  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:33:09.417274  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:33:09.436078  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:33:09.453766  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 09:33:09.470953  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 09:33:09.488130  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:33:09.505544  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 09:33:09.522860  294773 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:33:09.540519  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:33:09.553392  294773 ssh_runner.go:195] Run: openssl version
	I1025 09:33:09.560754  294773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:33:09.569438  294773 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.573257  294773 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.573324  294773 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:33:09.614754  294773 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:33:09.623192  294773 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:33:09.626674  294773 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 09:33:09.626766  294773 kubeadm.go:400] StartCluster: {Name:addons-523976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-523976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:33:09.626843  294773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:33:09.626901  294773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:33:09.653582  294773 cri.go:89] found id: ""
	I1025 09:33:09.653663  294773 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:33:09.661255  294773 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:33:09.668722  294773 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 09:33:09.668832  294773 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:33:09.676595  294773 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 09:33:09.676616  294773 kubeadm.go:157] found existing configuration files:
	
	I1025 09:33:09.676670  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 09:33:09.684094  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 09:33:09.684179  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 09:33:09.691410  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 09:33:09.699015  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 09:33:09.699081  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:33:09.706003  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 09:33:09.714205  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 09:33:09.714296  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:33:09.722043  294773 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 09:33:09.729836  294773 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 09:33:09.729943  294773 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:33:09.737074  294773 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 09:33:09.805217  294773 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 09:33:09.805466  294773 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 09:33:09.875725  294773 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 09:33:26.930017  294773 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 09:33:26.930081  294773 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 09:33:26.930175  294773 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 09:33:26.930237  294773 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 09:33:26.930279  294773 kubeadm.go:318] OS: Linux
	I1025 09:33:26.930330  294773 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 09:33:26.930384  294773 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 09:33:26.930437  294773 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 09:33:26.930490  294773 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 09:33:26.930544  294773 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 09:33:26.930597  294773 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 09:33:26.930647  294773 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 09:33:26.930700  294773 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 09:33:26.930751  294773 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 09:33:26.930829  294773 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 09:33:26.930930  294773 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 09:33:26.931026  294773 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 09:33:26.931093  294773 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 09:33:26.934032  294773 out.go:252]   - Generating certificates and keys ...
	I1025 09:33:26.934126  294773 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 09:33:26.934193  294773 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 09:33:26.934259  294773 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 09:33:26.934316  294773 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 09:33:26.934376  294773 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 09:33:26.934426  294773 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 09:33:26.934480  294773 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 09:33:26.934596  294773 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-523976 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:33:26.934648  294773 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 09:33:26.934763  294773 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-523976 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1025 09:33:26.934843  294773 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 09:33:26.934907  294773 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 09:33:26.934951  294773 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 09:33:26.935006  294773 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 09:33:26.935057  294773 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 09:33:26.935114  294773 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 09:33:26.935241  294773 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 09:33:26.935306  294773 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 09:33:26.935361  294773 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 09:33:26.935450  294773 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 09:33:26.935520  294773 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 09:33:26.940407  294773 out.go:252]   - Booting up control plane ...
	I1025 09:33:26.940582  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 09:33:26.940745  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 09:33:26.940832  294773 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 09:33:26.940970  294773 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 09:33:26.941089  294773 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 09:33:26.941202  294773 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 09:33:26.941298  294773 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 09:33:26.941341  294773 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 09:33:26.941514  294773 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 09:33:26.941647  294773 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 09:33:26.941735  294773 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 510.038928ms
	I1025 09:33:26.941862  294773 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 09:33:26.941986  294773 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1025 09:33:26.942111  294773 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 09:33:26.942231  294773 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 09:33:26.942335  294773 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.127628818s
	I1025 09:33:26.942424  294773 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.57127402s
	I1025 09:33:26.942501  294773 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00151284s
	I1025 09:33:26.942611  294773 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 09:33:26.942778  294773 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 09:33:26.942884  294773 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 09:33:26.943091  294773 kubeadm.go:318] [mark-control-plane] Marking the node addons-523976 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 09:33:26.943190  294773 kubeadm.go:318] [bootstrap-token] Using token: 866dt2.1n9azi2o7n2cpdcp
	I1025 09:33:26.948198  294773 out.go:252]   - Configuring RBAC rules ...
	I1025 09:33:26.948374  294773 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 09:33:26.948499  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 09:33:26.948667  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 09:33:26.948814  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 09:33:26.948943  294773 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 09:33:26.949054  294773 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 09:33:26.949208  294773 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 09:33:26.949271  294773 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 09:33:26.949343  294773 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 09:33:26.949385  294773 kubeadm.go:318] 
	I1025 09:33:26.949468  294773 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 09:33:26.949476  294773 kubeadm.go:318] 
	I1025 09:33:26.949559  294773 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 09:33:26.949572  294773 kubeadm.go:318] 
	I1025 09:33:26.949598  294773 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 09:33:26.949676  294773 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 09:33:26.949736  294773 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 09:33:26.949746  294773 kubeadm.go:318] 
	I1025 09:33:26.949816  294773 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 09:33:26.949825  294773 kubeadm.go:318] 
	I1025 09:33:26.949873  294773 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 09:33:26.949879  294773 kubeadm.go:318] 
	I1025 09:33:26.949951  294773 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 09:33:26.950070  294773 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 09:33:26.950154  294773 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 09:33:26.950163  294773 kubeadm.go:318] 
	I1025 09:33:26.950261  294773 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 09:33:26.950364  294773 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 09:33:26.950379  294773 kubeadm.go:318] 
	I1025 09:33:26.950490  294773 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 866dt2.1n9azi2o7n2cpdcp \
	I1025 09:33:26.950613  294773 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 09:33:26.950640  294773 kubeadm.go:318] 	--control-plane 
	I1025 09:33:26.950647  294773 kubeadm.go:318] 
	I1025 09:33:26.950749  294773 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 09:33:26.950764  294773 kubeadm.go:318] 
	I1025 09:33:26.950867  294773 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 866dt2.1n9azi2o7n2cpdcp \
	I1025 09:33:26.950994  294773 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
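The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that recomputes it from the CA certificate path used earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // CA path from the scp step above
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }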
	I1025 09:33:26.951019  294773 cni.go:84] Creating CNI manager for ""
	I1025 09:33:26.951031  294773 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:33:26.956042  294773 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:33:26.959839  294773 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:33:26.964240  294773 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:33:26.964267  294773 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:33:26.979082  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:33:27.268927  294773 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:33:27.269079  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:27.269162  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-523976 minikube.k8s.io/updated_at=2025_10_25T09_33_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=addons-523976 minikube.k8s.io/primary=true
	I1025 09:33:27.441354  294773 ops.go:34] apiserver oom_adj: -16
	I1025 09:33:27.448449  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:27.948520  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:28.449204  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:28.949248  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:29.449343  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:29.948564  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:30.448918  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:30.949143  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:31.449081  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:31.949222  294773 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 09:33:32.053363  294773 kubeadm.go:1113] duration metric: took 4.784329245s to wait for elevateKubeSystemPrivileges
	I1025 09:33:32.053397  294773 kubeadm.go:402] duration metric: took 22.426635075s to StartCluster
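The repeated `kubectl get sa default` invocations above, spaced roughly 500ms apart, are minikube polling until the default service account exists (the elevateKubeSystemPrivileges wait just summarized). A generic Go sketch of a poll loop with that shape; the command and timeout here are illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitFor retries check at the given interval until it succeeds or timeout elapses.
    func waitFor(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out; last error: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitFor(500*time.Millisecond, 6*time.Minute, func() error {
            // Mirrors the repeated command in the log; full kubectl path shortened here.
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println(err)
    }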
	I1025 09:33:32.053426  294773 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:32.053563  294773 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:33:32.053961  294773 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:33:32.054166  294773 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:33:32.054320  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 09:33:32.054566  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:32.054609  294773 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1025 09:33:32.054699  294773 addons.go:69] Setting yakd=true in profile "addons-523976"
	I1025 09:33:32.054717  294773 addons.go:238] Setting addon yakd=true in "addons-523976"
	I1025 09:33:32.054739  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.055269  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.055768  294773 addons.go:69] Setting metrics-server=true in profile "addons-523976"
	I1025 09:33:32.055793  294773 addons.go:238] Setting addon metrics-server=true in "addons-523976"
	I1025 09:33:32.055821  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.056219  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.056356  294773 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-523976"
	I1025 09:33:32.056394  294773 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-523976"
	I1025 09:33:32.056474  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.056906  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.058473  294773 addons.go:69] Setting registry=true in profile "addons-523976"
	I1025 09:33:32.058530  294773 addons.go:238] Setting addon registry=true in "addons-523976"
	I1025 09:33:32.058575  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.059015  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.059548  294773 addons.go:69] Setting registry-creds=true in profile "addons-523976"
	I1025 09:33:32.059568  294773 addons.go:238] Setting addon registry-creds=true in "addons-523976"
	I1025 09:33:32.059589  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.059970  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.060110  294773 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-523976"
	I1025 09:33:32.060126  294773 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-523976"
	I1025 09:33:32.060145  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.060517  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.072462  294773 addons.go:69] Setting cloud-spanner=true in profile "addons-523976"
	I1025 09:33:32.072548  294773 addons.go:238] Setting addon cloud-spanner=true in "addons-523976"
	I1025 09:33:32.072625  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.073143  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.093069  294773 addons.go:69] Setting storage-provisioner=true in profile "addons-523976"
	I1025 09:33:32.093105  294773 addons.go:238] Setting addon storage-provisioner=true in "addons-523976"
	I1025 09:33:32.093141  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.093708  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.097481  294773 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-523976"
	I1025 09:33:32.097521  294773 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-523976"
	I1025 09:33:32.097913  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.098593  294773 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-523976"
	I1025 09:33:32.098682  294773 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-523976"
	I1025 09:33:32.098750  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.100860  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.111344  294773 addons.go:69] Setting volcano=true in profile "addons-523976"
	I1025 09:33:32.111382  294773 addons.go:238] Setting addon volcano=true in "addons-523976"
	I1025 09:33:32.111425  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.111987  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.115220  294773 addons.go:69] Setting default-storageclass=true in profile "addons-523976"
	I1025 09:33:32.115255  294773 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-523976"
	I1025 09:33:32.115641  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.127288  294773 addons.go:69] Setting volumesnapshots=true in profile "addons-523976"
	I1025 09:33:32.127334  294773 addons.go:238] Setting addon volumesnapshots=true in "addons-523976"
	I1025 09:33:32.127384  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.127971  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.143305  294773 out.go:179] * Verifying Kubernetes components...
	I1025 09:33:32.144576  294773 addons.go:69] Setting gcp-auth=true in profile "addons-523976"
	I1025 09:33:32.144616  294773 mustload.go:65] Loading cluster: addons-523976
	I1025 09:33:32.144997  294773 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:33:32.145309  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.146920  294773 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:33:32.166473  294773 addons.go:69] Setting ingress=true in profile "addons-523976"
	I1025 09:33:32.166512  294773 addons.go:238] Setting addon ingress=true in "addons-523976"
	I1025 09:33:32.166558  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.167029  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.202615  294773 addons.go:69] Setting ingress-dns=true in profile "addons-523976"
	I1025 09:33:32.202645  294773 addons.go:238] Setting addon ingress-dns=true in "addons-523976"
	I1025 09:33:32.202688  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.203236  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.217561  294773 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1025 09:33:32.218418  294773 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1025 09:33:32.232166  294773 addons.go:69] Setting inspektor-gadget=true in profile "addons-523976"
	I1025 09:33:32.232197  294773 addons.go:238] Setting addon inspektor-gadget=true in "addons-523976"
	I1025 09:33:32.232234  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.232695  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.245517  294773 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1025 09:33:32.250003  294773 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:32.250024  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1025 09:33:32.250085  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.250381  294773 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1025 09:33:32.256525  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.259097  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1025 09:33:32.259117  294773 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1025 09:33:32.259183  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.289582  294773 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1025 09:33:32.290454  294773 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-523976"
	I1025 09:33:32.295255  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.295696  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.302437  294773 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:33:32.302741  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1025 09:33:32.302925  294773 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:32.302969  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1025 09:33:32.303059  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290547  294773 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1025 09:33:32.331291  294773 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:32.331309  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:33:32.331452  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290581  294773 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:32.331639  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1025 09:33:32.331687  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.352832  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1025 09:33:32.352856  294773 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1025 09:33:32.352919  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.290585  294773 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1025 09:33:32.361202  294773 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:32.361229  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1025 09:33:32.361300  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	W1025 09:33:32.386808  294773 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1025 09:33:32.387138  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1025 09:33:32.387197  294773 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1025 09:33:32.387293  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.410055  294773 out.go:179]   - Using image docker.io/registry:3.0.0
	I1025 09:33:32.412958  294773 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1025 09:33:32.412980  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1025 09:33:32.413054  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.432847  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1025 09:33:32.443257  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1025 09:33:32.444620  294773 addons.go:238] Setting addon default-storageclass=true in "addons-523976"
	I1025 09:33:32.444655  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:32.445047  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:32.456444  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.490821  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1025 09:33:32.494851  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1025 09:33:32.495185  294773 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1025 09:33:32.497167  294773 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1025 09:33:32.497252  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1025 09:33:32.508885  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.520285  294773 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1025 09:33:32.520459  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1025 09:33:32.525484  294773 out.go:179]   - Using image docker.io/busybox:stable
	I1025 09:33:32.525683  294773 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1025 09:33:32.525707  294773 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1025 09:33:32.525776  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.525959  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:32.528382  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1025 09:33:32.528508  294773 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:32.529573  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1025 09:33:32.529647  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.544121  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:32.547781  294773 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:32.547804  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1025 09:33:32.547870  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.568589  294773 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:32.568613  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1025 09:33:32.568681  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.579772  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.580575  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.581725  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.584281  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1025 09:33:32.587239  294773 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1025 09:33:32.590377  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.594638  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1025 09:33:32.594662  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1025 09:33:32.594729  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.626207  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.626203  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.646016  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.692261  294773 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:32.692284  294773 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:33:32.692346  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:32.692570  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.726448  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.739888  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	W1025 09:33:32.743812  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.743865  294773 retry.go:31] will retry after 206.867614ms: ssh: handshake failed: EOF
	I1025 09:33:32.752522  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.757258  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:32.758152  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	W1025 09:33:32.759626  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.759648  294773 retry.go:31] will retry after 206.364231ms: ssh: handshake failed: EOF
	I1025 09:33:32.812832  294773 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:33:32.813017  294773 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1025 09:33:32.967705  294773 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1025 09:33:32.967779  294773 retry.go:31] will retry after 334.533988ms: ssh: handshake failed: EOF
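Note: the three "ssh: handshake failed: EOF" warnings above are a transient startup race, not a test failure. sshd inside the freshly created node container is not yet accepting connections, so sshutil dials again after a short randomized delay, and all later clients on port 33142 connect cleanly. A minimal Go sketch of that retry-after pattern, assuming a hypothetical helper (this is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter calls fn until it succeeds or attempts run out, sleeping a
// jittered delay between tries, mirroring the "will retry after ..."
// lines in the log above.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 3 { // simulate sshd not ready for the first two dials
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
	fmt.Println("result:", err)
}

The jitter keeps the many concurrent dialers (one per addon manifest being copied here) from retrying in lockstep against the same sshd.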
	I1025 09:33:33.080232  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:33:33.113535  294773 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1025 09:33:33.113560  294773 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1025 09:33:33.153948  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1025 09:33:33.153969  294773 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1025 09:33:33.164358  294773 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:33.164382  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1025 09:33:33.206374  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1025 09:33:33.252061  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1025 09:33:33.252127  294773 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1025 09:33:33.253837  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1025 09:33:33.272028  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1025 09:33:33.289779  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1025 09:33:33.299859  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1025 09:33:33.300139  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1025 09:33:33.316538  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1025 09:33:33.337565  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1025 09:33:33.337593  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1025 09:33:33.366719  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1025 09:33:33.366743  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1025 09:33:33.377174  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1025 09:33:33.377199  294773 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1025 09:33:33.380190  294773 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:33.380212  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1025 09:33:33.395105  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:33:33.402997  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1025 09:33:33.523021  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1025 09:33:33.523095  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1025 09:33:33.575864  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:33.579948  294773 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:33.580011  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1025 09:33:33.610056  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1025 09:33:33.610129  294773 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1025 09:33:33.740012  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1025 09:33:33.798316  294773 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1025 09:33:33.798389  294773 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1025 09:33:33.850858  294773 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:33.850937  294773 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1025 09:33:33.929625  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1025 09:33:33.929696  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1025 09:33:33.957569  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1025 09:33:33.957644  294773 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1025 09:33:34.014740  294773 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.201690818s)
	I1025 09:33:34.014891  294773 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1025 09:33:34.014838  294773 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.201976425s)
	I1025 09:33:34.015744  294773 node_ready.go:35] waiting up to 6m0s for node "addons-523976" to be "Ready" ...
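The 1.2s command completed above is the CoreDNS rewrite started at 09:33:32.813017: it pipes the coredns ConfigMap through sed to insert a hosts{} block before the forward plugin (and a log directive before errors), then replaces the ConfigMap, which is what the "host record injected" line confirms. An illustrative stdlib-Go rendering of the same Corefile transformation (the trimmed Corefile below is an assumption):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Trimmed stand-in for the Corefile stored in the coredns ConfigMap.
	corefile := `.:53 {
    errors
    forward . /etc/resolv.conf
}`
	// Block the first sed expression inserts before the forward plugin,
	// mapping host.minikube.internal to the host-side gateway.
	hosts := `    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }`
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out = append(out, hosts) // sed: insert hosts{} before forward
		}
		if trimmed == "errors" {
			out = append(out, "    log") // second sed expression
		}
		out = append(out, line)
	}
	fmt.Println(strings.Join(out, "\n"))
}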
	I1025 09:33:34.035099  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1025 09:33:34.212521  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1025 09:33:34.212593  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1025 09:33:34.276532  294773 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:34.276604  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1025 09:33:34.519881  294773 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-523976" context rescaled to 1 replicas
	I1025 09:33:34.525590  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1025 09:33:34.525668  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1025 09:33:34.549903  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:34.641385  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1025 09:33:34.641455  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1025 09:33:34.893015  294773 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1025 09:33:34.893039  294773 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1025 09:33:35.023819  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1025 09:33:35.023840  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1025 09:33:35.240754  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1025 09:33:35.240835  294773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1025 09:33:35.271708  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1025 09:33:35.271783  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1025 09:33:35.286504  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1025 09:33:35.286574  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1025 09:33:35.509041  294773 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1025 09:33:35.509120  294773 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1025 09:33:35.726811  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1025 09:33:36.035834  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:36.748844  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.542382309s)
	I1025 09:33:36.748983  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.668724937s)
	I1025 09:33:37.911265  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.657355025s)
	I1025 09:33:37.911450  294773 addons.go:479] Verifying addon ingress=true in "addons-523976"
	I1025 09:33:37.911477  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.611529219s)
	I1025 09:33:37.911528  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.611350436s)
	I1025 09:33:37.911557  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.594998105s)
	I1025 09:33:37.911590  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.516463867s)
	I1025 09:33:37.911815  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.508791988s)
	I1025 09:33:37.911373  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.639246674s)
	I1025 09:33:37.911967  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.336025319s)
	W1025 09:33:37.911994  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:37.912011  294773 retry.go:31] will retry after 141.929822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
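Every attempt at this apply fails identically, with or without --force, because the problem is in the file itself: ig-crd.yaml reaches the node without top-level apiVersion and kind, so kubectl's client-side validation rejects it before the server is ever asked. The suggested --validate=false would only skip the check and apply a useless object; note that the rest of the gadget manifests do apply. A crude stdlib-Go pre-flight check for the two missing fields (a hypothetical check, not part of minikube):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var hasAPIVersion, hasKind bool
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		// Only unindented keys count; indented ones belong to sub-objects.
		if strings.HasPrefix(line, "apiVersion:") {
			hasAPIVersion = true
		}
		if strings.HasPrefix(line, "kind:") {
			hasKind = true
		}
	}
	if !hasAPIVersion || !hasKind {
		fmt.Println("ig-crd.yaml is missing apiVersion and/or kind; kubectl apply will fail validation")
		os.Exit(1)
	}
	fmt.Println("ig-crd.yaml has apiVersion and kind at top level")
}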
	I1025 09:33:37.912053  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.171961658s)
	I1025 09:33:37.911427  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.621578043s)
	I1025 09:33:37.912161  294773 addons.go:479] Verifying addon registry=true in "addons-523976"
	I1025 09:33:37.912283  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (3.87707482s)
	I1025 09:33:37.914308  294773 addons.go:479] Verifying addon metrics-server=true in "addons-523976"
	I1025 09:33:37.912363  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.362388455s)
	W1025 09:33:37.914359  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1025 09:33:37.914380  294773 retry.go:31] will retry after 341.272991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
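Unlike the ig-crd case, this failure is purely an ordering race: the volumesnapshot CRDs and the VolumeSnapshotClass that instantiates them travel in a single kubectl apply, and the class is validated before the just-created CRDs are established, hence "ensure CRDs are installed first". The scheduled retry succeeds once discovery serves snapshot.storage.k8s.io/v1 (the forced re-apply completes cleanly at 09:33:41 below). A sketch of the two-phase apply that sidesteps the race by shelling out to kubectl (file names taken from the log; the polling loop is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func kubectl(args ...string) error {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// Phase 1: CRDs only.
	if err := kubectl("apply", "-f", "snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
		panic(err)
	}
	// Phase 2: wait until the new API is actually served by discovery.
	for i := 0; i < 30; i++ {
		if kubectl("get", "volumesnapshotclasses.snapshot.storage.k8s.io") == nil {
			break
		}
		time.Sleep(time.Second)
	}
	// Phase 3: only now apply resources of the new kind.
	if err := kubectl("apply", "-f", "csi-hostpath-snapshotclass.yaml"); err != nil {
		panic(err)
	}
	fmt.Println("snapshot class applied")
}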
	I1025 09:33:37.915228  294773 out.go:179] * Verifying ingress addon...
	I1025 09:33:37.915234  294773 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-523976 service yakd-dashboard -n yakd-dashboard
	
	I1025 09:33:37.917112  294773 out.go:179] * Verifying registry addon...
	I1025 09:33:37.919937  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1025 09:33:37.919937  294773 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1025 09:33:37.936977  294773 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:33:37.936998  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:37.937520  294773 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1025 09:33:37.937534  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:33:37.954997  294773 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
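"the object has been modified" is the API server's optimistic-concurrency conflict: the storageclass update carried a resourceVersion that another writer (two addons adjusting default-class annotations at once here) had already bumped. The standard client-go remedy is to re-fetch and retry, for example with retry.RetryOnConflict; a minimal compilable sketch (clientset wiring omitted, the function name is illustrative):

package addons // illustrative package

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// markNonDefault re-fetches the StorageClass on every attempt so the
// update always carries a fresh resourceVersion.
func markNonDefault(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
		return err
	})
}

Because the closure re-reads the object on every attempt, each Update carries a fresh resourceVersion, so the conflict resolves as soon as the competing writer finishes.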
	I1025 09:33:38.054900  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:38.256243  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1025 09:33:38.436650  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.437057  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.469256  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.742354393s)
	I1025 09:33:38.469337  294773 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-523976"
	I1025 09:33:38.472387  294773 out.go:179] * Verifying csi-hostpath-driver addon...
	I1025 09:33:38.476035  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1025 09:33:38.480369  294773 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:33:38.480436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
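The long runs of "waiting for pod ... current state: Pending" that follow are kapi.go polling each addon's label selector until its pods leave Pending. A compilable sketch of that wait using client-go (clientset wiring omitted; the 500ms interval is an assumption, the 6m timeout mirrors the node wait above):

package addons // illustrative package

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabel blocks until every pod matching selector in ns is Running.
func waitForLabel(cs kubernetes.Interface, ns, selector string) error {
	return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // not there yet; keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // at least one pod still Pending
				}
			}
			return true, nil
		})
}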
	W1025 09:33:38.522219  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:38.941258  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:38.941476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:38.980002  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.224286  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.169295623s)
	W1025 09:33:39.224335  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:39.224355  294773 retry.go:31] will retry after 328.381467ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:39.424080  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.424345  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.523966  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:39.552959  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:39.868637  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1025 09:33:39.868730  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:39.893891  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:39.926277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:39.926871  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:39.979675  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.023526  294773 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1025 09:33:40.049392  294773 addons.go:238] Setting addon gcp-auth=true in "addons-523976"
	I1025 09:33:40.049525  294773 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:33:40.050073  294773 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:33:40.072928  294773 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1025 09:33:40.072986  294773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:33:40.093942  294773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:33:40.424213  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.424913  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.479775  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:40.923749  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:40.924273  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:40.981345  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:41.019253  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:41.322567  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.066210094s)
	I1025 09:33:41.322607  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.769611433s)
	I1025 09:33:41.322667  294773 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.24971705s)
	W1025 09:33:41.322677  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:41.322739  294773 retry.go:31] will retry after 529.604297ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:41.325986  294773 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1025 09:33:41.328817  294773 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1025 09:33:41.331707  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1025 09:33:41.331734  294773 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1025 09:33:41.345731  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1025 09:33:41.345808  294773 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1025 09:33:41.360431  294773 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:41.360454  294773 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1025 09:33:41.379918  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1025 09:33:41.425113  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.425830  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.479742  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:41.851278  294773 addons.go:479] Verifying addon gcp-auth=true in "addons-523976"
	I1025 09:33:41.852517  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:41.854643  294773 out.go:179] * Verifying gcp-auth addon...
	I1025 09:33:41.858300  294773 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1025 09:33:41.873753  294773 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1025 09:33:41.873773  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:41.970366  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:41.970826  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:41.979702  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:42.362003  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.424177  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.424930  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.479210  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:42.700233  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:42.700263  294773 retry.go:31] will retry after 1.042193162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:42.861895  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:42.924767  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:42.925102  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:42.979660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:43.361628  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.423957  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.424101  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.478968  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:43.518656  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:43.742886  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:43.862351  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:43.924400  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:43.924996  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:43.981196  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:44.362156  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.424289  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.424550  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.478914  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:44.583667  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:44.583751  294773 retry.go:31] will retry after 1.607103469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:44.861614  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:44.923636  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:44.924067  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:44.979933  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:45.362701  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.424124  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.424256  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.479610  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:45.519690  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:45.861833  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:45.923851  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:45.924175  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:45.978789  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.191967  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:46.361932  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.423344  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.423699  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.479796  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:46.861676  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:46.924512  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:46.924718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:46.979107  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:46.998042  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:46.998077  294773 retry.go:31] will retry after 2.121529079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:47.361907  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.424011  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:47.424357  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.479279  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:47.520373  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:47.861400  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:47.923168  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:47.923575  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:47.979401  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.361218  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.423613  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:48.423771  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.479812  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:48.861654  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:48.923817  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:48.924241  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:48.978895  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.120642  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:49.362169  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.424459  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:49.424747  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:49.479918  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:49.862193  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:49.923518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:49.923998  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1025 09:33:49.928377  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:49.928407  294773 retry.go:31] will retry after 2.976947527s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:49.979239  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:50.019359  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:50.360868  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.422926  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:50.423371  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.479128  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:50.862064  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:50.923253  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:50.923428  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:50.979186  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.361259  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.423729  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.423855  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:51.479485  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:51.861138  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:51.923001  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:51.923067  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:51.978940  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:52.019493  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:52.361488  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.423198  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:52.423342  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.479471  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:52.861007  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:52.906159  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:52.926083  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:52.927049  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:52.979261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:53.362650  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.425461  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:53.425718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:53.480570  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:53.757648  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:53.757730  294773 retry.go:31] will retry after 2.276492655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
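
Note that the retry delays differ on every attempt (2.97s above, 2.27s here, then 8.99s, 8.08s, and 7.70s further down), which points at randomized backoff rather than a fixed interval. The exact policy retry.go uses is not visible in this log; the following is a minimal standard-library sketch of the general pattern only.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered, growing
// delay between failures -- the shape of the retry.go lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// jittered backoff: random delay in [base, 2*base), doubling each round
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("Process exited with status 1")
		}
		return nil
	})
	fmt.Println("final:", err)
}
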
	I1025 09:33:53.861580  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:53.924088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:53.924182  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:53.979188  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:54.361743  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.423851  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:54.424655  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:54.479778  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:54.519374  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:54.861725  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:54.924449  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:54.924829  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:54.980891  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.361890  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.423068  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:55.423337  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:55.479370  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:55.861212  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:55.925103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:55.925660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:55.979576  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:56.034848  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:33:56.361173  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.424868  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:56.425230  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:56.479242  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:56.819866  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:56.819896  294773 retry.go:31] will retry after 8.994283387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:33:56.862253  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:56.923260  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:56.923294  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:56.979478  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:57.019315  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:57.361300  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:57.423237  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:57.423749  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:57.479581  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:57.861088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:57.923687  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:57.923770  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:57.979664  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.361534  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:58.423944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:58.424241  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:58.478892  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:58.861882  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:58.923637  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:58.923894  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:58.979727  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:33:59.019426  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:33:59.373745  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:59.423651  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:59.423964  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:59.479963  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:33:59.861716  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:33:59.923910  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:33:59.924217  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:33:59.979845  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.369320  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:00.424360  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:00.425419  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:00.487592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:00.861920  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:00.924330  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:00.925103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:00.980169  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:01.361442  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:01.423843  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:01.424005  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:01.479792  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:01.518683  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:01.863357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:01.923468  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:01.923857  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:01.979669  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:02.361823  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:02.424395  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:02.424942  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:02.479994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:02.861772  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:02.924055  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:02.924453  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:02.979658  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:03.361609  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:03.423611  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:03.423900  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:03.481107  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:03.519128  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:03.861004  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:03.923267  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:03.923666  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:03.979436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:04.361317  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:04.423505  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:04.423912  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:04.479765  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:04.862201  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:04.923608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:04.923674  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:04.979984  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:05.363353  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:05.424136  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:05.424802  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:05.479618  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:05.520346  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:05.814901  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:05.862166  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:05.924381  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:05.924700  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:05.980210  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:06.363355  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:06.423360  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:06.423750  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:06.479690  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:06.624889  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:06.624921  294773 retry.go:31] will retry after 8.085733239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:06.862084  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:06.923788  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:06.923922  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:06.979854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:07.362418  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:07.423586  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:07.423791  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:07.479883  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:07.861582  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:07.923639  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:07.923792  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:07.979719  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:08.018942  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:08.363077  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:08.423300  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:08.423682  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:08.479258  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:08.862015  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:08.923245  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:08.923608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:08.980050  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:09.362235  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:09.423804  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:09.424457  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:09.479536  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:09.861320  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:09.923411  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:09.923696  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:09.979746  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:10.019141  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:10.362013  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:10.423363  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:10.425567  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:10.479664  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:10.861475  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:10.923686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:10.923954  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:10.979832  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:11.361180  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:11.423273  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:11.423408  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:11.479937  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:11.861800  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:11.924579  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:11.924701  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:11.979391  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1025 09:34:12.019458  294773 node_ready.go:57] node "addons-523976" has "Ready":"False" status (will retry)
	I1025 09:34:12.361557  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:12.465310  294773 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1025 09:34:12.465336  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:12.465492  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:12.510216  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:12.538166  294773 node_ready.go:49] node "addons-523976" is "Ready"
	I1025 09:34:12.538197  294773 node_ready.go:38] duration metric: took 38.522425157s for node "addons-523976" to be "Ready" ...
	I1025 09:34:12.538212  294773 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:34:12.538273  294773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:34:12.557278  294773 api_server.go:72] duration metric: took 40.50307676s to wait for apiserver process to appear ...
	I1025 09:34:12.557354  294773 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:34:12.557389  294773 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1025 09:34:12.581479  294773 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1025 09:34:12.583781  294773 api_server.go:141] control plane version: v1.34.1
	I1025 09:34:12.583851  294773 api_server.go:131] duration metric: took 26.476299ms to wait for apiserver health ...
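
Once the node turns Ready, api_server.go probes https://192.168.49.2:8443/healthz directly and accepts a 200 response with the literal body "ok", as logged above. A self-contained sketch of that probe follows; skipping TLS verification is a shortcut taken here purely for illustration (the real check trusts the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// assumption for the sketch; minikube's check verifies against the cluster CA
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
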
	I1025 09:34:12.583876  294773 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:34:12.609509  294773 system_pods.go:59] 19 kube-system pods found
	I1025 09:34:12.609613  294773 system_pods.go:61] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.609634  294773 system_pods.go:61] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending
	I1025 09:34:12.609654  294773 system_pods.go:61] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.609691  294773 system_pods.go:61] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.609712  294773 system_pods.go:61] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.609736  294773 system_pods.go:61] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.609773  294773 system_pods.go:61] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.609795  294773 system_pods.go:61] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.609834  294773 system_pods.go:61] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending
	I1025 09:34:12.609857  294773 system_pods.go:61] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.609876  294773 system_pods.go:61] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.609910  294773 system_pods.go:61] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending
	I1025 09:34:12.609935  294773 system_pods.go:61] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.609954  294773 system_pods.go:61] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.609996  294773 system_pods.go:61] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.610021  294773 system_pods.go:61] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.610041  294773 system_pods.go:61] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.610076  294773 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.610107  294773 system_pods.go:61] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.610133  294773 system_pods.go:74] duration metric: took 26.235869ms to wait for pod list to return data ...
	I1025 09:34:12.610179  294773 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:34:12.630703  294773 default_sa.go:45] found service account: "default"
	I1025 09:34:12.630778  294773 default_sa.go:55] duration metric: took 20.5781ms for default service account to be created ...
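
default_sa.go is only confirming that the "default" ServiceAccount exists, since pods cannot be admitted into a namespace before it appears. The same lookup via client-go, again with an assumed local kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err) // not created yet -- minikube would retry here instead of failing
	}
	fmt.Println("found service account:", sa.Name)
}
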
	I1025 09:34:12.630801  294773 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:34:12.662355  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:12.662436  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.662457  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending
	I1025 09:34:12.662480  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.662517  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.662541  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.662564  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.662601  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.662627  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.662653  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:12.662688  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.662713  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.662734  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending
	I1025 09:34:12.662771  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.662795  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.662815  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.662854  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.662878  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.662898  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.662935  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.662970  294773 retry.go:31] will retry after 216.383731ms: missing components: kube-dns
	I1025 09:34:12.880850  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:12.931917  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:12.931999  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending
	I1025 09:34:12.932023  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:12.932064  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending
	I1025 09:34:12.932092  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:12.932113  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:12.932151  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:12.932176  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:12.932197  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:12.932236  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:12.932261  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:12.932285  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:12.932324  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:12.932352  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:12.932374  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending
	I1025 09:34:12.932421  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending
	I1025 09:34:12.932459  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:12.932508  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:12.932544  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending
	I1025 09:34:12.932581  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:12.932616  294773 retry.go:31] will retry after 234.578617ms: missing components: kube-dns
	I1025 09:34:12.951849  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:12.951944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:12.984623  294773 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1025 09:34:12.984645  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:13.174783  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:13.174902  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 09:34:13.174931  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:13.174970  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:34:13.175001  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending
	I1025 09:34:13.175025  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:13.175058  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:13.175081  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:13.175104  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:13.175144  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:13.175228  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:13.175250  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:13.175273  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:13.175309  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending
	I1025 09:34:13.175335  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:34:13.175358  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:34:13.175396  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending
	I1025 09:34:13.175422  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending
	I1025 09:34:13.175445  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.175484  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 09:34:13.175521  294773 retry.go:31] will retry after 436.812233ms: missing components: kube-dns
	I1025 09:34:13.367546  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:13.428639  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:13.429068  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:13.479741  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:13.627055  294773 system_pods.go:86] 19 kube-system pods found
	I1025 09:34:13.627169  294773 system_pods.go:89] "coredns-66bc5c9577-7ztdw" [2fc532e5-2871-43b2-a9ca-2155676f95a1] Running
	I1025 09:34:13.627234  294773 system_pods.go:89] "csi-hostpath-attacher-0" [3cfeea01-8814-41b1-9059-434f7cede325] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1025 09:34:13.627259  294773 system_pods.go:89] "csi-hostpath-resizer-0" [88a5a5d7-037a-4b2f-a2d7-b8f273e0c3cc] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1025 09:34:13.627306  294773 system_pods.go:89] "csi-hostpathplugin-jzdxn" [a3e42799-cb35-440b-8161-0292d2e47360] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1025 09:34:13.627331  294773 system_pods.go:89] "etcd-addons-523976" [51110d95-849a-4d23-b45e-1b9b5554f090] Running
	I1025 09:34:13.627355  294773 system_pods.go:89] "kindnet-x2lt6" [00ae329b-d096-4f67-b8b1-e27b0609b40c] Running
	I1025 09:34:13.627394  294773 system_pods.go:89] "kube-apiserver-addons-523976" [71f810d6-18ac-4eb7-bede-9fab0edd3e35] Running
	I1025 09:34:13.627425  294773 system_pods.go:89] "kube-controller-manager-addons-523976" [088d16b3-728c-4299-aa14-d6d24747e8ec] Running
	I1025 09:34:13.627450  294773 system_pods.go:89] "kube-ingress-dns-minikube" [cd99163c-521e-4204-82e9-042f4ced1951] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1025 09:34:13.627488  294773 system_pods.go:89] "kube-proxy-sfnch" [d96f2ba5-9b51-43c5-bfdc-8b1e254d3f7c] Running
	I1025 09:34:13.627514  294773 system_pods.go:89] "kube-scheduler-addons-523976" [d4e796bf-6752-48ab-ae77-46d95e85b096] Running
	I1025 09:34:13.627539  294773 system_pods.go:89] "metrics-server-85b7d694d7-rvf2w" [3e21a96e-44a3-4a2a-832b-942702186126] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1025 09:34:13.627564  294773 system_pods.go:89] "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1025 09:34:13.627610  294773 system_pods.go:89] "registry-6b586f9694-zbqtr" [85df0936-f76b-4735-8421-1890c338b1a7] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1025 09:34:13.627638  294773 system_pods.go:89] "registry-creds-764b6fb674-8qvgv" [b074d8cf-486c-474b-868b-534d304e5e83] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1025 09:34:13.627718  294773 system_pods.go:89] "registry-proxy-2kb6l" [f1663764-ddd7-4002-a86b-adaa75c4a254] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1025 09:34:13.627755  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-225jc" [612be4eb-ba87-4e21-b937-6aa4eba52f1c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.627783  294773 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ml7zh" [bee5f140-16eb-45aa-b046-c1dfa54d55f0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1025 09:34:13.627827  294773 system_pods.go:89] "storage-provisioner" [318c1c01-6ea9-4c3d-b9e0-1f4f15ec8357] Running
	I1025 09:34:13.627944  294773 system_pods.go:126] duration metric: took 997.120994ms to wait for k8s-apps to be running ...
	I1025 09:34:13.627975  294773 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:34:13.628219  294773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:34:13.708636  294773 system_svc.go:56] duration metric: took 80.651978ms WaitForService to wait for kubelet
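
The system_svc.go check above is just the exit status of `sudo systemctl is-active --quiet service kubelet` run over SSH: with --quiet there is no output, and status 0 means the unit is active. A local sketch of the same idea, minus the SSH hop and sudo:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// exit status 0 means the unit is active; --quiet suppresses all output
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
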
	I1025 09:34:13.708717  294773 kubeadm.go:586] duration metric: took 41.654518417s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:34:13.708752  294773 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:34:13.712276  294773 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:34:13.712356  294773 node_conditions.go:123] node cpu capacity is 2
	I1025 09:34:13.712384  294773 node_conditions.go:105] duration metric: took 3.608305ms to run NodePressure ...
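
node_conditions.go reads capacity straight off the Node object (203034800Ki ephemeral storage and 2 CPUs here) and then verifies the node's pressure conditions. Reading the same fields with client-go; the kubeconfig path is an assumption of the sketch, and the node name is taken from the log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-523976", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String())
	for _, c := range node.Status.Conditions {
		// MemoryPressure / DiskPressure / PIDPressure should all be False on a healthy node
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
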
	I1025 09:34:13.712409  294773 start.go:241] waiting for startup goroutines ...
	I1025 09:34:13.862032  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:13.924721  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:13.925118  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:13.981282  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:14.362888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:14.462346  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:14.462560  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:14.479836  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:14.711114  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:14.862899  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:14.925231  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:14.925557  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:14.980730  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:15.378559  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:15.478199  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:15.478512  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:15.480834  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:15.862041  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:15.905473  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.194305674s)
	W1025 09:34:15.905512  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:15.905531  294773 retry.go:31] will retry after 7.709249366s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:15.925647  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:15.926015  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:15.980480  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:16.363141  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:16.424789  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:16.426141  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:16.480099  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:16.862628  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:16.925725  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:16.926094  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:16.981610  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:17.363362  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:17.463882  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:17.464055  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:17.479598  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:17.865049  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:17.927335  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:17.928805  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:17.979624  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:18.364135  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:18.425334  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:18.425677  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:18.480208  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:18.861224  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:18.926622  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:18.926703  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:18.981905  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:19.376323  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:19.480623  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:19.481054  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:19.493027  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:19.862333  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:19.923694  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:19.923817  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:19.981505  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:20.361714  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:20.424207  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:20.424352  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:20.479583  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:20.862060  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:20.923693  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:20.923882  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:20.980132  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:21.362457  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:21.464079  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:21.464435  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:21.480177  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:21.862351  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:21.925137  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:21.925278  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:21.979280  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:22.361629  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:22.425121  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:22.425505  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:22.480254  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:22.862040  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:22.924079  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:22.924236  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:22.980105  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:23.361115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:23.424926  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:23.426141  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:23.480731  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:23.614974  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:23.861190  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:23.924809  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:23.928331  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:23.980094  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:24.361751  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:24.424779  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:24.424934  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:24.480331  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:24.676504  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.061487932s)
	W1025 09:34:24.676541  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:24.676559  294773 retry.go:31] will retry after 16.34380046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:24.861592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:24.924539  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:24.924794  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:24.979894  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:25.364115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:25.463115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:25.463759  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:25.479762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:25.863865  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:25.924799  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:25.925165  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:25.980114  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:26.362783  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:26.425685  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:26.425824  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:26.480061  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:26.865437  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:26.923628  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:26.923725  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:26.980008  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:27.365261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:27.425206  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:27.425620  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:27.481363  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:27.861695  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:27.924858  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:27.924995  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:27.980270  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:28.363264  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:28.423940  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:28.424133  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:28.480395  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:28.862476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:28.924762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:28.925724  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:28.980732  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:29.374916  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:29.474286  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:29.474684  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:29.480278  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:29.862032  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:29.924261  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:29.924406  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:29.979595  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:30.362934  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:30.424435  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:30.424578  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:30.479918  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:30.862091  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:30.924291  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:30.925183  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:30.979473  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:31.361964  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:31.423176  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:31.424301  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:31.479357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:31.862507  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:31.935565  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:31.940938  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:31.980389  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:32.361574  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:32.424855  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:32.425236  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:32.482046  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:32.861520  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:32.924304  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:32.924827  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:32.981692  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:33.361491  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:33.424065  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:33.424348  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:33.479448  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:33.902703  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:33.938977  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:33.939513  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:33.992803  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:34.363314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:34.425008  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:34.425314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:34.479947  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:34.863944  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:34.924495  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:34.924742  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:34.979524  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:35.362854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:35.432510  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:35.432759  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:35.480241  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:35.862303  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:35.924360  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:35.924435  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:35.980115  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:36.361781  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:36.426261  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:36.427296  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:36.480690  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:36.863229  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:36.925940  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:36.926449  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:36.980848  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:37.362227  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:37.426204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:37.426782  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:37.480623  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:37.862126  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:37.922811  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:37.923363  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:37.979341  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:38.361489  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:38.425942  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:38.426477  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:38.479547  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:38.862012  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:38.924533  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:38.924562  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:38.979768  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:39.362066  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:39.424768  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:39.425277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:39.481790  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:39.862175  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:39.924718  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:39.925146  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:39.979863  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:40.363510  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:40.425260  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:40.425645  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:40.479953  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:40.862118  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:40.923957  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:40.924566  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:40.980204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:41.021556  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:34:41.362036  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:41.425215  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:41.425518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:41.480640  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:41.862806  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:41.924157  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:41.925608  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:41.979592  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:42.103049  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.081402045s)
	W1025 09:34:42.103093  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1025 09:34:42.103196  294773 retry.go:31] will retry after 25.861703469s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
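
Note the spacing of the retries: 7.7s after the first failure, 16.3s after the second, 25.9s here. The delays grow roughly geometrically with some randomness, the standard backoff-with-jitter shape. An illustrative sketch of that pattern (an assumption about the shape of the delays, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs op until it succeeds or attempts run out, roughly doubling
	// the delay each time and adding up to 50% random jitter.
	func retry(attempts int, base time.Duration, op func() error) error {
		delay := base
		var lastErr error
		for i := 0; i < attempts; i++ {
			if lastErr = op(); lastErr == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			fmt.Printf("will retry after %s\n", delay+jitter)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return lastErr
	}

	func main() {
		// Stand-in for the failing kubectl apply above.
		err := retry(3, 7*time.Second, func() error { return errors.New("apply failed") })
		fmt.Println(err)
	}
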
	I1025 09:34:42.361277  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:42.425325  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:42.425740  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:42.480476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:42.861604  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:42.925265  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:42.925673  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:42.980186  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:43.361418  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:43.424837  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:43.424879  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:43.481102  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:43.863081  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:43.926863  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:43.927427  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:43.980602  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:44.362752  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:44.425887  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:44.426299  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:44.482994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:44.861667  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:44.924400  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:44.924604  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:44.980519  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:45.388967  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:45.425450  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:45.425983  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:45.490642  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:45.863897  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:45.924896  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:45.925384  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:45.982387  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:46.361420  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:46.425453  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:46.425872  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:46.480915  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:46.863673  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:46.924595  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:46.925496  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:46.979975  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:47.362247  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:47.425032  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:47.425414  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:47.479728  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:47.862683  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:47.964004  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:47.964430  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:47.979481  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:48.361809  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:48.423574  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:48.424045  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:48.480994  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:48.862182  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:48.925018  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:48.925476  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:48.980844  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:49.362015  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:49.424733  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:49.425642  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:49.479639  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:49.864070  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:49.924327  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:49.925197  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:49.979519  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:50.361392  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:50.426760  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:50.427271  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:50.480482  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:50.861686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:50.925272  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:50.926473  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:50.980136  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:51.361787  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:51.426870  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:51.427430  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:51.480167  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:51.861528  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:51.924341  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:51.924730  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:51.980305  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:52.362570  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:52.425436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:52.425753  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:52.480695  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:52.861527  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:52.924702  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:52.925946  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:52.980660  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:53.362546  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:53.424359  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:53.424541  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:53.480125  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:53.861888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:53.924639  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:53.925058  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:53.979289  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:54.361204  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:54.426256  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:54.426464  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:54.479280  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:54.861464  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:54.924502  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:54.924649  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:54.979863  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:55.362674  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:55.426822  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:55.427454  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:55.480384  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:55.862370  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:55.925055  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:55.925662  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:55.980262  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:56.361221  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:56.424024  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:56.424202  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:56.479373  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:56.861028  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:56.923494  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:56.924107  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:56.979233  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:57.361711  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:57.424780  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:57.425980  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:57.480655  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:57.862595  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:57.925182  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:57.925679  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:57.979945  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:58.371888  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:58.427055  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:58.427268  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:58.479578  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:58.862042  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:58.925987  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:58.926348  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:58.980310  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:59.361554  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:59.424273  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:59.424728  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:59.480325  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:34:59.861631  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:34:59.926939  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:34:59.927298  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:34:59.979902  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:00.371404  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:00.426511  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:00.427247  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:00.479729  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:00.862526  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:00.924820  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:00.925039  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:00.980309  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:01.362292  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:01.424071  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:01.424243  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:01.480553  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:01.862392  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:01.924785  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:01.925241  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:01.979436  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:02.362842  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:02.426166  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:02.426545  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:02.480571  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:02.861734  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:02.925729  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:02.926152  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:02.980627  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:03.362470  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:03.424047  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:03.424103  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:03.480142  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:03.862222  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:03.923507  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1025 09:35:03.924104  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:03.980518  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:04.362810  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:04.426157  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:04.426620  294773 kapi.go:107] duration metric: took 1m26.506683422s to wait for kubernetes.io/minikube-addons=registry ...
	I1025 09:35:04.480748  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:04.862879  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:04.924682  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:04.987618  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:05.362433  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:05.423816  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:05.480890  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:05.861938  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:05.932295  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:05.979470  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:06.361698  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:06.424157  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:06.480343  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:06.861747  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:06.924108  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:06.979129  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:07.362434  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:07.423509  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:07.480040  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:07.862314  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:07.923596  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:07.965888  294773 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1025 09:35:07.979614  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:08.362281  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:08.423803  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:08.480130  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:08.862124  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:08.923681  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:08.986962  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:09.114950  294773 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.149020723s)
	W1025 09:35:09.115032  294773 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1025 09:35:09.115187  294773 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
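
The apply above fails kubectl's client-side validation: every YAML document in a manifest must declare apiVersion and kind, and ig-crd.yaml evidently carries a document missing both, so the daemonset applies but the CRD does not. A minimal sketch for inspecting and reproducing this, assuming the same manifest path shown in the log:

    # Every document in the manifest should open with lines like:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
    grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml

    # Reproduce the validation failure without touching the cluster:
    kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml

The error text names the escape hatch (--validate=false), but repairing the manifest header is the proper fix.
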
	I1025 09:35:09.361466  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:09.424225  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:09.479567  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:09.861753  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:09.923566  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:09.979686  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:10.362245  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:10.423858  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:10.479988  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:10.861708  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:10.924074  294773 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1025 09:35:10.980035  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:11.362173  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:11.423404  294773 kapi.go:107] duration metric: took 1m33.503473644s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1025 09:35:11.479599  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:11.862174  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:11.980067  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:12.362348  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:12.479873  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:12.861930  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:12.980568  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:13.362497  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:13.491357  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:13.861803  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1025 09:35:13.980958  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:14.361589  294773 kapi.go:107] duration metric: took 1m32.503280429s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1025 09:35:14.371456  294773 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-523976 cluster.
	I1025 09:35:14.378079  294773 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1025 09:35:14.385401  294773 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1025 09:35:14.482312  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:14.979587  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:15.480048  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:15.979254  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:16.480395  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:16.979939  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:17.479989  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:17.979420  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:18.481495  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:18.983619  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:19.480226  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:19.984762  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:20.480651  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:20.979687  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:21.481520  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:21.980572  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:22.479358  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:22.980854  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:23.479088  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:23.979864  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:24.479808  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:24.979582  294773 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1025 09:35:25.479415  294773 kapi.go:107] duration metric: took 1m47.003378818s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1025 09:35:25.482606  294773 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, metrics-server, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1025 09:35:25.485477  294773 addons.go:514] duration metric: took 1m53.43086173s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner registry-creds nvidia-device-plugin cloud-spanner ingress-dns metrics-server yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1025 09:35:25.485543  294773 start.go:246] waiting for cluster config update ...
	I1025 09:35:25.485565  294773 start.go:255] writing updated cluster config ...
	I1025 09:35:25.485911  294773 ssh_runner.go:195] Run: rm -f paused
	I1025 09:35:25.489867  294773 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:35:25.494352  294773 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7ztdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.499056  294773 pod_ready.go:94] pod "coredns-66bc5c9577-7ztdw" is "Ready"
	I1025 09:35:25.499091  294773 pod_ready.go:86] duration metric: took 4.708494ms for pod "coredns-66bc5c9577-7ztdw" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.501489  294773 pod_ready.go:83] waiting for pod "etcd-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.506234  294773 pod_ready.go:94] pod "etcd-addons-523976" is "Ready"
	I1025 09:35:25.506262  294773 pod_ready.go:86] duration metric: took 4.748495ms for pod "etcd-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.508809  294773 pod_ready.go:83] waiting for pod "kube-apiserver-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.513835  294773 pod_ready.go:94] pod "kube-apiserver-addons-523976" is "Ready"
	I1025 09:35:25.513868  294773 pod_ready.go:86] duration metric: took 5.032003ms for pod "kube-apiserver-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.516347  294773 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:25.894846  294773 pod_ready.go:94] pod "kube-controller-manager-addons-523976" is "Ready"
	I1025 09:35:25.894874  294773 pod_ready.go:86] duration metric: took 378.498321ms for pod "kube-controller-manager-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.095428  294773 pod_ready.go:83] waiting for pod "kube-proxy-sfnch" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.494701  294773 pod_ready.go:94] pod "kube-proxy-sfnch" is "Ready"
	I1025 09:35:26.494796  294773 pod_ready.go:86] duration metric: took 399.341039ms for pod "kube-proxy-sfnch" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:26.695130  294773 pod_ready.go:83] waiting for pod "kube-scheduler-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:27.094436  294773 pod_ready.go:94] pod "kube-scheduler-addons-523976" is "Ready"
	I1025 09:35:27.094465  294773 pod_ready.go:86] duration metric: took 399.277956ms for pod "kube-scheduler-addons-523976" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:35:27.094479  294773 pod_ready.go:40] duration metric: took 1.604579657s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:35:27.410606  294773 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:35:27.415673  294773 out.go:179] * Done! kubectl is now configured to use "addons-523976" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 09:35:56 addons-523976 crio[831]: time="2025-10-25T09:35:56.408966222Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:56 addons-523976 crio[831]: time="2025-10-25T09:35:56.409465084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:56 addons-523976 crio[831]: time="2025-10-25T09:35:56.427570319Z" level=info msg="Created container c1808ccca1318df52acb45416f34b4b1915fedc1e2b4e9441055e2bf8a172638: default/test-local-path/busybox" id=e4ff14ec-530d-49ba-9ba3-245c09a7bf44 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:56 addons-523976 crio[831]: time="2025-10-25T09:35:56.428591872Z" level=info msg="Starting container: c1808ccca1318df52acb45416f34b4b1915fedc1e2b4e9441055e2bf8a172638" id=9fbb76a2-9747-48c3-a5a4-719a3da65fce name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:35:56 addons-523976 crio[831]: time="2025-10-25T09:35:56.430189833Z" level=info msg="Started container" PID=5382 containerID=c1808ccca1318df52acb45416f34b4b1915fedc1e2b4e9441055e2bf8a172638 description=default/test-local-path/busybox id=9fbb76a2-9747-48c3-a5a4-719a3da65fce name=/runtime.v1.RuntimeService/StartContainer sandboxID=772583e3b7fd0ecceba60f2149c80d345dcfd7d945aceda2fe8b4017055358f1
	Oct 25 09:35:58 addons-523976 crio[831]: time="2025-10-25T09:35:58.266135603Z" level=info msg="Stopping pod sandbox: 772583e3b7fd0ecceba60f2149c80d345dcfd7d945aceda2fe8b4017055358f1" id=fb765340-45ea-4a89-a9d5-c0ab04cba360 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:35:58 addons-523976 crio[831]: time="2025-10-25T09:35:58.266470213Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:772583e3b7fd0ecceba60f2149c80d345dcfd7d945aceda2fe8b4017055358f1 UID:bd6315a3-2fe0-4869-9a97-1a8c91efdc03 NetNS:/var/run/netns/8cc7723e-9bea-498c-8db7-081228e28678 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a92698}] Aliases:map[]}"
	Oct 25 09:35:58 addons-523976 crio[831]: time="2025-10-25T09:35:58.266624143Z" level=info msg="Deleting pod default_test-local-path from CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:35:58 addons-523976 crio[831]: time="2025-10-25T09:35:58.297660899Z" level=info msg="Stopped pod sandbox: 772583e3b7fd0ecceba60f2149c80d345dcfd7d945aceda2fe8b4017055358f1" id=fb765340-45ea-4a89-a9d5-c0ab04cba360 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.510629516Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58/POD" id=c6230a85-4bfd-46e1-a414-6aba4bd33fe2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.510694862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.518818015Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58 Namespace:local-path-storage ID:bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc UID:921d7198-3ce7-46ef-a791-561ba07fb455 NetNS:/var/run/netns/5670f37d-2670-4b6b-a6ee-a4626768ceea Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a92c40}] Aliases:map[]}"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.518860223Z" level=info msg="Adding pod local-path-storage_helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58 to CNI network \"kindnet\" (type=ptp)"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.544104253Z" level=info msg="Got pod network &{Name:helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58 Namespace:local-path-storage ID:bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc UID:921d7198-3ce7-46ef-a791-561ba07fb455 NetNS:/var/run/netns/5670f37d-2670-4b6b-a6ee-a4626768ceea Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000a92c40}] Aliases:map[]}"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.544453501Z" level=info msg="Checking pod local-path-storage_helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58 for CNI network kindnet (type=ptp)"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.556019572Z" level=info msg="Ran pod sandbox bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc with infra container: local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58/POD" id=c6230a85-4bfd-46e1-a414-6aba4bd33fe2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.557463948Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=02add094-14de-455f-bc43-3489f012cb7c name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.559635046Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=66e912cd-632c-41d1-9750-a80c9f6a0c1a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.572207021Z" level=info msg="Creating container: local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58/helper-pod" id=65354bb5-08a0-48f8-a96f-851c1f7bbdec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.572424608Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.598681205Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.608376827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.633800051Z" level=info msg="Created container d184f6812f342b123f51ec0c1708ca1593fe94cb2cdf16021fe31bc6beaae6c6: local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58/helper-pod" id=65354bb5-08a0-48f8-a96f-851c1f7bbdec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.639119056Z" level=info msg="Starting container: d184f6812f342b123f51ec0c1708ca1593fe94cb2cdf16021fe31bc6beaae6c6" id=9984210c-ef62-434b-928b-455a7945dcb0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 09:35:59 addons-523976 crio[831]: time="2025-10-25T09:35:59.64598876Z" level=info msg="Started container" PID=5496 containerID=d184f6812f342b123f51ec0c1708ca1593fe94cb2cdf16021fe31bc6beaae6c6 description=local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58/helper-pod id=9984210c-ef62-434b-928b-455a7945dcb0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	d184f6812f342       fc9db2894f4e4b8c296b8c9dab7e18a6e78de700d21bc0cfaf5c78484226db9c                                                                             1 second ago         Exited              helper-pod                               0                   bd419c5ee4885       helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58   local-path-storage
	c1808ccca1318       docker.io/library/busybox@sha256:aefc3a378c4cf11a6d85071438d3bf7634633a34c6a68d4c5f928516d556c366                                            4 seconds ago        Exited              busybox                                  0                   772583e3b7fd0       test-local-path                                              default
	1e38716319b45       docker.io/library/busybox@sha256:1fa89c01cd0473cedbd1a470abb8c139eeb80920edf1bc55de87851bfb63ea11                                            8 seconds ago        Exited              helper-pod                               0                   58f15fbff44b6       helper-pod-create-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58   local-path-storage
	aea27855a9832       gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9                                          10 seconds ago       Exited              registry-test                            0                   a2a6415d060db       registry-test                                                default
	95759d5756a3e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          30 seconds ago       Running             busybox                                  0                   23a17b309db44       busybox                                                      default
	ed2b16c18354f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          37 seconds ago       Running             csi-snapshotter                          0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	5aa333b5df5f3       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          38 seconds ago       Running             csi-provisioner                          0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	7fa48c16691b3       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            40 seconds ago       Running             liveness-probe                           0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	dba2f43dd64c7       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           41 seconds ago       Running             hostpath                                 0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	c0b6483dcceab       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            43 seconds ago       Running             gadget                                   0                   ff76404a183b7       gadget-47j62                                                 gadget
	a28457a8a0b99       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                46 seconds ago       Running             node-driver-registrar                    0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	a85b223a43799       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 47 seconds ago       Running             gcp-auth                                 0                   ade14d6ee2368       gcp-auth-78565c9fb4-sv7g4                                    gcp-auth
	7526138c74ffc       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             51 seconds ago       Running             controller                               0                   c63d3cbd08d49       ingress-nginx-controller-675c5ddd98-bs2mg                    ingress-nginx
	f6953e828b170       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   57 seconds ago       Exited              patch                                    0                   9475cc6c28e4e       ingress-nginx-admission-patch-gd8wq                          ingress-nginx
	50e4b1142cbe6       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           58 seconds ago       Running             registry                                 0                   3b75b6242f09a       registry-6b586f9694-zbqtr                                    kube-system
	39f4662ae9889       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     59 seconds ago       Running             nvidia-device-plugin-ctr                 0                   a0ad6b08471b2       nvidia-device-plugin-daemonset-bc95g                         kube-system
	1b9e3466e10d1       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              About a minute ago   Running             registry-proxy                           0                   ae7d2b4221b95       registry-proxy-2kb6l                                         kube-system
	0fb942eed357a       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   306b86391f445       kube-ingress-dns-minikube                                    kube-system
	cf39d704be2f7       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   18d047a3ac6db       csi-hostpathplugin-jzdxn                                     kube-system
	52ad74c56c561       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   4b01ebe945ad7       yakd-dashboard-5ff678cb9-4p7gh                               yakd-dashboard
	8ffb283dc89c3       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   87188b0223099       local-path-provisioner-648f6765c9-b49pk                      local-path-storage
	76b94cc8a56cd       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   3474cd324087b       csi-hostpath-resizer-0                                       kube-system
	bd4a3acf65df2       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   f35f7e7bf2689       csi-hostpath-attacher-0                                      kube-system
	c2ce713b653ba       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   01fc6df7d665b       cloud-spanner-emulator-86bd5cbb97-4j9jb                      default
	3552646870f03       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   f33fe217f6ca6       snapshot-controller-7d9fbc56b8-ml7zh                         kube-system
	1671d001e906d       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   702ab1aafa6bc       snapshot-controller-7d9fbc56b8-225jc                         kube-system
	526fdc9a670ac       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   8f72727748e62       metrics-server-85b7d694d7-rvf2w                              kube-system
	dc240a0f2902a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             About a minute ago   Exited              create                                   0                   fd6d4c4db03c8       ingress-nginx-admission-create-z2xx7                         ingress-nginx
	baef1ffd7c044       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   00299902c1d1b       storage-provisioner                                          kube-system
	08942e9bb2ed5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   09fc4f35a43a0       coredns-66bc5c9577-7ztdw                                     kube-system
	1522372d280f9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   1fa4251ee3f13       kindnet-x2lt6                                                kube-system
	a34c99943c936       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   977202e858a1e       kube-proxy-sfnch                                             kube-system
	dc95e59147e2e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   c49ae75a87042       kube-scheduler-addons-523976                                 kube-system
	3f9ccf0f1d26a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   a5b7f5746f60e       etcd-addons-523976                                           kube-system
	20bbcad0ad16d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   b991f7b33446f       kube-apiserver-addons-523976                                 kube-system
	78f4542e06d9b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   e0f61e42ee1d4       kube-controller-manager-addons-523976                        kube-system
	
	
	==> coredns [08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec] <==
	[INFO] 10.244.0.14:35614 - 50306 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001612707s
	[INFO] 10.244.0.14:35614 - 14746 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000095025s
	[INFO] 10.244.0.14:35614 - 8531 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000076349s
	[INFO] 10.244.0.14:52949 - 31067 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000145151s
	[INFO] 10.244.0.14:52949 - 31288 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000093146s
	[INFO] 10.244.0.14:35691 - 24587 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079172s
	[INFO] 10.244.0.14:35691 - 24374 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000180122s
	[INFO] 10.244.0.14:43133 - 9999 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080977s
	[INFO] 10.244.0.14:43133 - 9802 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000059349s
	[INFO] 10.244.0.14:48822 - 53661 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001635846s
	[INFO] 10.244.0.14:48822 - 53842 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001569129s
	[INFO] 10.244.0.14:45126 - 40056 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103952s
	[INFO] 10.244.0.14:45126 - 39917 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000099325s
	[INFO] 10.244.0.20:57576 - 62403 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000182632s
	[INFO] 10.244.0.20:55826 - 48568 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000087755s
	[INFO] 10.244.0.20:34350 - 37491 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190461s
	[INFO] 10.244.0.20:45288 - 45649 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00008275s
	[INFO] 10.244.0.20:34389 - 9784 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000139613s
	[INFO] 10.244.0.20:43730 - 26631 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000064517s
	[INFO] 10.244.0.20:57026 - 20742 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001876506s
	[INFO] 10.244.0.20:60046 - 5008 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00225626s
	[INFO] 10.244.0.20:45951 - 53029 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001463134s
	[INFO] 10.244.0.20:34130 - 57338 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001063293s
	[INFO] 10.244.0.23:59021 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000190157s
	[INFO] 10.244.0.23:47287 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000138004s
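
The NXDOMAIN bursts above are Kubernetes search-path expansion, not resolution failures: pods run with ndots:5, so a name such as registry.kube-system.svc.cluster.local (four dots) is tried against each search suffix (the namespace, svc.cluster.local, cluster.local, the node's compute.internal domain) before the absolute name answers NOERROR. A quick way to confirm, assuming the busybox pod from the container list is still running in the default namespace:

    # Expect "options ndots:5" and the same search suffixes seen in the queries above
    kubectl exec busybox -- cat /etc/resolv.conf
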
	
	
	==> describe nodes <==
	Name:               addons-523976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-523976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=addons-523976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_33_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-523976
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-523976"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:33:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-523976
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:35:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:35:39 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:35:39 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:35:39 +0000   Sat, 25 Oct 2025 09:33:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:35:39 +0000   Sat, 25 Oct 2025 09:34:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-523976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0ef687d5-da5c-4f15-a993-7ab4a5927695
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     cloud-spanner-emulator-86bd5cbb97-4j9jb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-47j62                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gcp-auth                    gcp-auth-78565c9fb4-sv7g4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-bs2mg    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m24s
	  kube-system                 coredns-66bc5c9577-7ztdw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 csi-hostpathplugin-jzdxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 etcd-addons-523976                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m35s
	  kube-system                 kindnet-x2lt6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m30s
	  kube-system                 kube-apiserver-addons-523976                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-addons-523976        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-sfnch                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-scheduler-addons-523976                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 metrics-server-85b7d694d7-rvf2w              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m25s
	  kube-system                 nvidia-device-plugin-daemonset-bc95g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 registry-6b586f9694-zbqtr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 registry-creds-764b6fb674-8qvgv              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 registry-proxy-2kb6l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 snapshot-controller-7d9fbc56b8-225jc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 snapshot-controller-7d9fbc56b8-ml7zh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  local-path-storage          local-path-provisioner-648f6765c9-b49pk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-4p7gh               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m29s  kube-proxy       
	  Normal   Starting                 2m35s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m35s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s  kubelet          Node addons-523976 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s  kubelet          Node addons-523976 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s  kubelet          Node addons-523976 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m31s  node-controller  Node addons-523976 event: Registered Node addons-523976 in Controller
	  Normal   NodeReady                109s   kubelet          Node addons-523976 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015587] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503041] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036759] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.769713] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.474162] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:29] hrtimer: interrupt took 30248914 ns
	[Oct25 09:08] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct25 09:31] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[  +0.069522] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666] <==
	{"level":"warn","ts":"2025-10-25T09:33:22.242664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.257219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.273790Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.310720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.335185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.357607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.361964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.378553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.403347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.418249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.432370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.472467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.473137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.484918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.507286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.539978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.568266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.582277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:22.696115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:38.912296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:33:38.963546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.469271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.485281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.544627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:34:00.564739Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39164","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a85b223a43799148b940ebf69ec28f62644781722898fd7f8089aba4eb872729] <==
	2025/10/25 09:35:13 GCP Auth Webhook started!
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:28 Ready to marshal response ...
	2025/10/25 09:35:28 Ready to write response ...
	2025/10/25 09:35:48 Ready to marshal response ...
	2025/10/25 09:35:48 Ready to write response ...
	2025/10/25 09:35:51 Ready to marshal response ...
	2025/10/25 09:35:51 Ready to write response ...
	2025/10/25 09:35:51 Ready to marshal response ...
	2025/10/25 09:35:51 Ready to write response ...
	2025/10/25 09:35:59 Ready to marshal response ...
	2025/10/25 09:35:59 Ready to write response ...
	
	
	==> kernel <==
	 09:36:01 up  1:18,  0 user,  load average: 2.32, 3.06, 3.40
	Linux addons-523976 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa] <==
	I1025 09:34:03.593132       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:34:03.593186       1 metrics.go:72] Registering metrics
	I1025 09:34:03.593320       1 controller.go:711] "Syncing nftables rules"
	I1025 09:34:11.991657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:11.991760       1 main.go:301] handling current node
	I1025 09:34:21.991968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:21.992017       1 main.go:301] handling current node
	I1025 09:34:31.991236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:31.991317       1 main.go:301] handling current node
	I1025 09:34:41.997688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:41.997722       1 main.go:301] handling current node
	I1025 09:34:51.991370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:34:51.991401       1 main.go:301] handling current node
	I1025 09:35:01.992127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:01.992166       1 main.go:301] handling current node
	I1025 09:35:11.991038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:11.991067       1 main.go:301] handling current node
	I1025 09:35:21.991252       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:21.991350       1 main.go:301] handling current node
	I1025 09:35:31.991044       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:31.991081       1 main.go:301] handling current node
	I1025 09:35:41.992277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:41.992413       1 main.go:301] handling current node
	I1025 09:35:51.991973       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:35:51.992026       1 main.go:301] handling current node
	
	
	==> kube-apiserver [20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234] <==
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:34:28.737254       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.193.112:443: connect: connection refused" logger="UnhandledError"
	E1025 09:34:28.743292       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.193.112:443: connect: connection refused" logger="UnhandledError"
	W1025 09:34:29.737818       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:34:29.737860       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1025 09:34:29.737875       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:34:29.737949       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:34:29.738025       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1025 09:34:29.739127       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1025 09:34:33.773337       1 handler_proxy.go:99] no RequestInfo found in the context
	E1025 09:34:33.773391       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1025 09:34:33.775060       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.193.112:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	I1025 09:34:33.823294       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1025 09:34:33.878118       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E1025 09:35:37.813678       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40482: use of closed network connection
	E1025 09:35:38.055921       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40512: use of closed network connection
	E1025 09:35:38.188410       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40526: use of closed network connection
	
	
	==> kube-controller-manager [78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb] <==
	I1025 09:33:30.492759       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 09:33:30.493337       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:33:30.493468       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:33:30.493526       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 09:33:30.493767       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 09:33:30.494339       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:33:30.494631       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 09:33:30.495071       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:33:30.495126       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:33:30.495195       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:33:30.495216       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 09:33:30.495455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:33:30.510952       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:33:30.522793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1025 09:33:36.937000       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1025 09:34:00.461065       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:34:00.461228       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1025 09:34:00.461288       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1025 09:34:00.531559       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1025 09:34:00.537064       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1025 09:34:00.562705       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:34:00.637270       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:34:15.470661       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1025 09:34:30.567804       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1025 09:34:30.652935       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b] <==
	I1025 09:33:31.777845       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:33:31.877718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:33:31.978318       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:33:31.978357       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:33:31.978419       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:33:32.014117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:33:32.014182       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:33:32.018779       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:33:32.019104       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:33:32.019129       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:32.020569       1 config.go:200] "Starting service config controller"
	I1025 09:33:32.020595       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:33:32.020615       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:33:32.020620       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:33:32.020654       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:33:32.020664       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:33:32.021301       1 config.go:309] "Starting node config controller"
	I1025 09:33:32.021320       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:33:32.021326       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:33:32.120826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:33:32.120826       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:33:32.120854       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1] <==
	I1025 09:33:24.064333       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:33:24.068979       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:33:24.069138       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:24.069165       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:33:24.069183       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 09:33:24.079348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 09:33:24.079706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 09:33:24.079759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 09:33:24.079805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 09:33:24.079850       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 09:33:24.079894       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 09:33:24.079933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 09:33:24.079975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 09:33:24.080013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 09:33:24.080057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 09:33:24.080098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 09:33:24.080147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 09:33:24.080193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 09:33:24.080236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 09:33:24.080277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 09:33:24.080328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 09:33:24.080457       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 09:33:24.080503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 09:33:24.080565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1025 09:33:25.269516       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 09:35:58 addons-523976 kubelet[1282]: I1025 09:35:58.450222    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd6315a3-2fe0-4869-9a97-1a8c91efdc03-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "bd6315a3-2fe0-4869-9a97-1a8c91efdc03" (UID: "bd6315a3-2fe0-4869-9a97-1a8c91efdc03"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:35:58 addons-523976 kubelet[1282]: I1025 09:35:58.456288    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6315a3-2fe0-4869-9a97-1a8c91efdc03-kube-api-access-dzsgw" (OuterVolumeSpecName: "kube-api-access-dzsgw") pod "bd6315a3-2fe0-4869-9a97-1a8c91efdc03" (UID: "bd6315a3-2fe0-4869-9a97-1a8c91efdc03"). InnerVolumeSpecName "kube-api-access-dzsgw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:35:58 addons-523976 kubelet[1282]: I1025 09:35:58.550992    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bd6315a3-2fe0-4869-9a97-1a8c91efdc03-gcp-creds\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:35:58 addons-523976 kubelet[1282]: I1025 09:35:58.551039    1282 reconciler_common.go:299] "Volume detached for volume \"pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\" (UniqueName: \"kubernetes.io/host-path/bd6315a3-2fe0-4869-9a97-1a8c91efdc03-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:35:58 addons-523976 kubelet[1282]: I1025 09:35:58.551065    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzsgw\" (UniqueName: \"kubernetes.io/projected/bd6315a3-2fe0-4869-9a97-1a8c91efdc03-kube-api-access-dzsgw\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:35:59 addons-523976 kubelet[1282]: I1025 09:35:59.272070    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="772583e3b7fd0ecceba60f2149c80d345dcfd7d945aceda2fe8b4017055358f1"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: E1025 09:35:59.274009    1282 status_manager.go:1018] "Failed to get status for pod" err="pods \"test-local-path\" is forbidden: User \"system:node:addons-523976\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\": no relationship found between node 'addons-523976' and this object" podUID="bd6315a3-2fe0-4869-9a97-1a8c91efdc03" pod="default/test-local-path"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: I1025 09:35:59.360916    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-data\") pod \"helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") " pod="local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: I1025 09:35:59.360986    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-gcp-creds\") pod \"helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") " pod="local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: I1025 09:35:59.361047    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/921d7198-3ce7-46ef-a791-561ba07fb455-script\") pod \"helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") " pod="local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: I1025 09:35:59.361079    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq5bp\" (UniqueName: \"kubernetes.io/projected/921d7198-3ce7-46ef-a791-561ba07fb455-kube-api-access-bq5bp\") pod \"helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") " pod="local-path-storage/helper-pod-delete-pvc-e0338399-28dc-478f-89a3-735d9bdcfa58"
	Oct 25 09:35:59 addons-523976 kubelet[1282]: W1025 09:35:59.555803    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9fc15dbb1b0abf2c8c465291568d03010a5cd699b548062f138e8f27aea60bc1/crio-bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc WatchSource:0}: Error finding container bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc: Status 404 returned error can't find the container with id bd419c5ee4885b1c8065b58acb30de81d6b7b2be355c0224a44f72feca7ef2fc
	Oct 25 09:36:00 addons-523976 kubelet[1282]: I1025 09:36:00.275678    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6315a3-2fe0-4869-9a97-1a8c91efdc03" path="/var/lib/kubelet/pods/bd6315a3-2fe0-4869-9a97-1a8c91efdc03/volumes"
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.495717    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-gcp-creds\") pod \"921d7198-3ce7-46ef-a791-561ba07fb455\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") "
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.495786    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-data\") pod \"921d7198-3ce7-46ef-a791-561ba07fb455\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") "
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.495812    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/921d7198-3ce7-46ef-a791-561ba07fb455-script\") pod \"921d7198-3ce7-46ef-a791-561ba07fb455\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") "
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.495849    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq5bp\" (UniqueName: \"kubernetes.io/projected/921d7198-3ce7-46ef-a791-561ba07fb455-kube-api-access-bq5bp\") pod \"921d7198-3ce7-46ef-a791-561ba07fb455\" (UID: \"921d7198-3ce7-46ef-a791-561ba07fb455\") "
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.496233    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "921d7198-3ce7-46ef-a791-561ba07fb455" (UID: "921d7198-3ce7-46ef-a791-561ba07fb455"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.496294    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-data" (OuterVolumeSpecName: "data") pod "921d7198-3ce7-46ef-a791-561ba07fb455" (UID: "921d7198-3ce7-46ef-a791-561ba07fb455"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.497000    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921d7198-3ce7-46ef-a791-561ba07fb455-script" (OuterVolumeSpecName: "script") pod "921d7198-3ce7-46ef-a791-561ba07fb455" (UID: "921d7198-3ce7-46ef-a791-561ba07fb455"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.507540    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921d7198-3ce7-46ef-a791-561ba07fb455-kube-api-access-bq5bp" (OuterVolumeSpecName: "kube-api-access-bq5bp") pod "921d7198-3ce7-46ef-a791-561ba07fb455" (UID: "921d7198-3ce7-46ef-a791-561ba07fb455"). InnerVolumeSpecName "kube-api-access-bq5bp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.596345    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bq5bp\" (UniqueName: \"kubernetes.io/projected/921d7198-3ce7-46ef-a791-561ba07fb455-kube-api-access-bq5bp\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.596388    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-gcp-creds\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.596400    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/921d7198-3ce7-46ef-a791-561ba07fb455-data\") on node \"addons-523976\" DevicePath \"\""
	Oct 25 09:36:01 addons-523976 kubelet[1282]: I1025 09:36:01.596426    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/921d7198-3ce7-46ef-a791-561ba07fb455-script\") on node \"addons-523976\" DevicePath \"\""
	
	
	==> storage-provisioner [baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b] <==
	W1025 09:35:37.853024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:39.856211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:39.862948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:41.866623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:41.871127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:43.874510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:43.881554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:45.884872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:45.889671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:47.893052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:47.897651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:49.900292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:49.904688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:51.908225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:51.913131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:53.923001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:53.930002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:55.933530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:55.938306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:57.941923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:57.946684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:59.949859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:35:59.957199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:01.960182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:36:01.965328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
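A note on the dump above: the kube-apiserver section shows the v1beta1.metrics.k8s.io APIService repeatedly failing aggregation (503s, connection refused to 10.97.193.112:443, and "stale GroupVersion discovery" in the controller-manager). A quick manual check of that APIService is sketched below; the kubectl invocations are illustrative, and in particular the k8s-app=metrics-server label is an assumption based on the standard metrics-server addon, not something taken from this report:

	# check the Available condition the aggregator keeps flapping on
	kubectl --context addons-523976 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{range .status.conditions[?(@.type=="Available")]}{.status}{" "}{.message}{"\n"}{end}'
	# and whether anything is actually backing the service IP from the log
	kubectl --context addons-523976 -n kube-system get pods -l k8s-app=metrics-server -o wide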
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-523976 -n addons-523976
helpers_test.go:269: (dbg) Run:  kubectl --context addons-523976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq registry-creds-764b6fb674-8qvgv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq registry-creds-764b6fb674-8qvgv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq registry-creds-764b6fb674-8qvgv: exit status 1 (93.084133ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z2xx7" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gd8wq" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-8qvgv" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-523976 describe pod ingress-nginx-admission-create-z2xx7 ingress-nginx-admission-patch-gd8wq registry-creds-764b6fb674-8qvgv: exit status 1
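The describe step exits 1 only because the admission-job pods were garbage-collected between the field-selector listing and the describe call. A hedged sketch that tolerates that race, using kubectl get with --ignore-not-found instead of describe (the per-pod namespaces here are assumptions based on the usual addon layout, not confirmed by this report):

	kubectl --context addons-523976 -n ingress-nginx get pod ingress-nginx-admission-create-z2xx7 --ignore-not-found -o wide
	kubectl --context addons-523976 -n ingress-nginx get pod ingress-nginx-admission-patch-gd8wq --ignore-not-found -o wide
	kubectl --context addons-523976 -n kube-system get pod registry-creds-764b6fb674-8qvgv --ignore-not-found -o wide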
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable headlamp --alsologtostderr -v=1: exit status 11 (262.993212ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:36:03.060571  302137 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:36:03.061406  302137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:03.061445  302137 out.go:374] Setting ErrFile to fd 2...
	I1025 09:36:03.061470  302137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:36:03.061809  302137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:36:03.062211  302137 mustload.go:65] Loading cluster: addons-523976
	I1025 09:36:03.062647  302137 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:03.062688  302137 addons.go:606] checking whether the cluster is paused
	I1025 09:36:03.062834  302137 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:36:03.062867  302137 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:36:03.063460  302137 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:36:03.082112  302137 ssh_runner.go:195] Run: systemctl --version
	I1025 09:36:03.082176  302137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:36:03.100399  302137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:36:03.210088  302137 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:36:03.210173  302137 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:36:03.241679  302137 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:36:03.241742  302137 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:36:03.241761  302137 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:36:03.241783  302137 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:36:03.241824  302137 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:36:03.241850  302137 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:36:03.241869  302137 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:36:03.241890  302137 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:36:03.241910  302137 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:36:03.241939  302137 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:36:03.241962  302137 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:36:03.241982  302137 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:36:03.242002  302137 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:36:03.242022  302137 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:36:03.242049  302137 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:36:03.242076  302137 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:36:03.242105  302137 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:36:03.242125  302137 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:36:03.242155  302137 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:36:03.242181  302137 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:36:03.242206  302137 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:36:03.242225  302137 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:36:03.242244  302137 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:36:03.242274  302137 cri.go:89] found id: ""
	I1025 09:36:03.242356  302137 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:36:03.257222  302137 out.go:203] 
	W1025 09:36:03.260145  302137 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:36:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:36:03.260208  302137 out.go:285] * 
	* 
	W1025 09:36:03.266517  302137 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:36:03.269478  302137 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.86s)
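Every MK_ADDON_DISABLE_PAUSED failure in this report has the same shape: before disabling an addon, minikube checks for paused containers by running sudo runc list -f json, and on this crio node the call dies because /run/runc does not exist. A minimal sketch of a more tolerant probe follows; the candidate state directories other than /run/runc are assumptions (crun and containerd defaults), not paths confirmed by this report:

	# probe common OCI runtime state roots before listing containers;
	# if none exists there is nothing paused to report, so fall through
	for root in /run/runc /run/crun /run/containerd/runc/k8s.io; do
	  if sudo test -d "$root"; then
	    sudo runc --root "$root" list -f json
	    break
	  fi
	done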

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-4j9jb" [2e71e0c7-9885-45d2-bc2e-aa5bcab096c4] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006259379s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (323.277634ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:35:59.170568  301501 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:59.172314  301501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.172367  301501 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:59.172388  301501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.172694  301501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:59.173032  301501 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:59.173437  301501 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.173470  301501 addons.go:606] checking whether the cluster is paused
	I1025 09:35:59.173595  301501 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.173634  301501 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:59.174108  301501 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:59.193183  301501 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:59.193239  301501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:59.215292  301501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:59.333666  301501 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:59.333763  301501 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:59.379844  301501 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:59.379868  301501 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:59.379873  301501 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:59.379883  301501 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:59.379887  301501 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:59.379891  301501 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:59.379893  301501 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:59.379898  301501 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:59.379900  301501 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:59.379907  301501 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:59.379910  301501 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:59.379913  301501 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:59.379917  301501 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:59.379921  301501 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:59.379927  301501 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:59.379932  301501 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:59.379935  301501 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:59.379939  301501 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:59.379942  301501 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:59.379945  301501 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:59.379949  301501 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:59.379952  301501 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:59.379955  301501 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:59.379958  301501 cri.go:89] found id: ""
	I1025 09:35:59.380012  301501 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:59.397890  301501 out.go:203] 
	W1025 09:35:59.400823  301501 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:59.400848  301501 out.go:285] * 
	* 
	W1025 09:35:59.408046  301501 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:59.410869  301501 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.34s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.52s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-523976 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-523976 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o jsonpath={.status.phase} -n default
2025/10/25 09:35:53 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [bd6315a3-2fe0-4869-9a97-1a8c91efdc03] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [bd6315a3-2fe0-4869-9a97-1a8c91efdc03] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [bd6315a3-2fe0-4869-9a97-1a8c91efdc03] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003370436s
addons_test.go:967: (dbg) Run:  kubectl --context addons-523976 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 ssh "cat /opt/local-path-provisioner/pvc-e0338399-28dc-478f-89a3-735d9bdcfa58_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-523976 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-523976 delete pvc test-pvc
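For reference, the verification this test performed can be replayed by hand while the claim is still live; a minimal sketch, assuming the claim reaches phase Bound once the consuming pod is scheduled (the WaitForFirstConsumer behavior implied by the repeated phase polls above):

	# wait for the claim to bind, then read the file the busybox pod wrote
	until [ "$(kubectl --context addons-523976 get pvc test-pvc -o jsonpath='{.status.phase}')" = "Bound" ]; do
	  sleep 2
	done
	out/minikube-linux-arm64 -p addons-523976 ssh \
	  "cat /opt/local-path-provisioner/pvc-e0338399-28dc-478f-89a3-735d9bdcfa58_default_test-pvc/file1"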
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (334.88748ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 09:35:59.292912  301524 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:59.293723  301524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.293755  301524 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:59.293779  301524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:59.294631  301524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:59.294978  301524 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:59.295420  301524 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.295462  301524 addons.go:606] checking whether the cluster is paused
	I1025 09:35:59.295594  301524 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:59.295629  301524 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:59.296152  301524 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:59.312876  301524 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:59.312955  301524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:59.334854  301524 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:59.445696  301524 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:59.445784  301524 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:59.504957  301524 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:59.504976  301524 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:59.504981  301524 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:59.504985  301524 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:59.504988  301524 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:59.504992  301524 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:59.504995  301524 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:59.504998  301524 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:59.505001  301524 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:59.505008  301524 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:59.505012  301524 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:59.505016  301524 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:59.505019  301524 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:59.505022  301524 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:59.505025  301524 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:59.505030  301524 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:59.505033  301524 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:59.505037  301524 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:59.505040  301524 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:59.505043  301524 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:59.505047  301524 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:59.505051  301524 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:59.505054  301524 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:59.505056  301524 cri.go:89] found id: ""
	I1025 09:35:59.505106  301524 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:59.541663  301524 out.go:203] 
	W1025 09:35:59.545078  301524 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:59.545106  301524 out.go:285] * 
	* 
	W1025 09:35:59.554514  301524 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:59.561381  301524 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.52s)

TestAddons/parallel/NvidiaDevicePlugin (6.33s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bc95g" [7c8dd890-af51-4a2a-97fb-d8cdb2ca8d45] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00390148s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (320.878977ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:50.796398  301088 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:50.797485  301088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:50.797502  301088 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:50.797507  301088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:50.797809  301088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:50.798100  301088 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:50.798488  301088 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:50.798498  301088 addons.go:606] checking whether the cluster is paused
	I1025 09:35:50.798599  301088 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:50.798608  301088 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:50.799054  301088 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:50.828573  301088 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:50.828635  301088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:50.848744  301088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:50.958388  301088 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:50.958477  301088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:50.991018  301088 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:50.991049  301088 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:50.991055  301088 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:50.991059  301088 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:50.991079  301088 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:50.991090  301088 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:50.991100  301088 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:50.991104  301088 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:50.991107  301088 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:50.991113  301088 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:50.991121  301088 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:50.991125  301088 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:50.991128  301088 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:50.991131  301088 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:50.991135  301088 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:50.991140  301088 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:50.991184  301088 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:50.991190  301088 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:50.991193  301088 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:50.991197  301088 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:50.991203  301088 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:50.991209  301088 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:50.991212  301088 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:50.991215  301088 cri.go:89] found id: ""
	I1025 09:35:50.991283  301088 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:51.024611  301088 out.go:203] 
	W1025 09:35:51.027750  301088 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:51.027791  301088 out.go:285] * 
	* 
	W1025 09:35:51.034295  301088 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:51.037836  301088 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.33s)

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-4p7gh" [9e1929ad-7b35-4c7c-939b-79c7ed978496] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003383049s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-523976 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-523976 addons disable yakd --alsologtostderr -v=1: exit status 11 (262.133686ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1025 09:35:44.506646  300996 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:35:44.507525  300996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:44.507569  300996 out.go:374] Setting ErrFile to fd 2...
	I1025 09:35:44.507591  300996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:35:44.507878  300996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:35:44.508245  300996 mustload.go:65] Loading cluster: addons-523976
	I1025 09:35:44.508651  300996 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:44.508693  300996 addons.go:606] checking whether the cluster is paused
	I1025 09:35:44.508822  300996 config.go:182] Loaded profile config "addons-523976": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:35:44.508856  300996 host.go:66] Checking if "addons-523976" exists ...
	I1025 09:35:44.509339  300996 cli_runner.go:164] Run: docker container inspect addons-523976 --format={{.State.Status}}
	I1025 09:35:44.526541  300996 ssh_runner.go:195] Run: systemctl --version
	I1025 09:35:44.526593  300996 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-523976
	I1025 09:35:44.548811  300996 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33142 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/addons-523976/id_rsa Username:docker}
	I1025 09:35:44.655079  300996 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:35:44.655190  300996 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:35:44.683388  300996 cri.go:89] found id: "ed2b16c18354ffa6267ae4706c62ecb69903697108528bf3863d2c2f803003ab"
	I1025 09:35:44.683412  300996 cri.go:89] found id: "5aa333b5df5f3ee286cd80ea5e23a63933f660a1b393c5976044b6e482a2c97d"
	I1025 09:35:44.683418  300996 cri.go:89] found id: "7fa48c16691b308c6bf2eb952b017336f242e28234f560131888e2d56e48e118"
	I1025 09:35:44.683422  300996 cri.go:89] found id: "dba2f43dd64c7e26ab6c50f62acff63a9a44dd4cb23eb4a87be444c1da4780f8"
	I1025 09:35:44.683425  300996 cri.go:89] found id: "a28457a8a0b99896fc7bf9cfedc8e6bb01604c4ff4898d1fa276c7414283da93"
	I1025 09:35:44.683429  300996 cri.go:89] found id: "50e4b1142cbe65528bd1fbea23f4a7cf8003971e787273be5ee9d761dd7d289a"
	I1025 09:35:44.683432  300996 cri.go:89] found id: "39f4662ae9889bd547a3a4f4113124067f374286ab741f16041dc41fb4114334"
	I1025 09:35:44.683436  300996 cri.go:89] found id: "1b9e3466e10d18a916b8b3ec8e9512ca940197df8b87e324321530625c61950d"
	I1025 09:35:44.683439  300996 cri.go:89] found id: "0fb942eed357aedba394a71016114374a6b2ed13681c0c5c38b14300e6a78c97"
	I1025 09:35:44.683446  300996 cri.go:89] found id: "cf39d704be2f7601f61a7c4b76d66ca29278bce7cb61230b512dc0e395167c83"
	I1025 09:35:44.683450  300996 cri.go:89] found id: "76b94cc8a56cdd1dd2786cce2dee6201336592abca494e3c2d0a0c44be0abba4"
	I1025 09:35:44.683453  300996 cri.go:89] found id: "bd4a3acf65df2db5f82683dcc498529829e037415304a75de206142446234ac5"
	I1025 09:35:44.683457  300996 cri.go:89] found id: "3552646870f0354b62b73c1619bc8ef8234250c45937619a3d46141689e64913"
	I1025 09:35:44.683460  300996 cri.go:89] found id: "1671d001e906d6b3e01b5bc2e97abf3d2e04f6eb1403aa9fa3280a571355cb59"
	I1025 09:35:44.683464  300996 cri.go:89] found id: "526fdc9a670ac5982ffa9e1688d32a36fac0c0a2596260db2fb60eec5c732d15"
	I1025 09:35:44.683469  300996 cri.go:89] found id: "baef1ffd7c044365d9d03e5ce421585250bb6e1abf553ca3388e8ed456f0238b"
	I1025 09:35:44.683477  300996 cri.go:89] found id: "08942e9bb2ed5fa71628fa7fb77f43e33bb0c653db62df711678b474d20c96ec"
	I1025 09:35:44.683481  300996 cri.go:89] found id: "1522372d280f9cea2d797057a2ac964089f52ed80759c765101c5ae665e9beaa"
	I1025 09:35:44.683484  300996 cri.go:89] found id: "a34c99943c9360cd74f2410f3e1dc4a77dcfde99ee71da412b0ee5916e892d1b"
	I1025 09:35:44.683489  300996 cri.go:89] found id: "dc95e59147e2eec17d430ab406fa8f6b59a62de8f50462f7a609116945f95fd1"
	I1025 09:35:44.683501  300996 cri.go:89] found id: "3f9ccf0f1d26a6dd05ec9d553991a3c6f9f4da2ba3138959689b1a91c99e7666"
	I1025 09:35:44.683508  300996 cri.go:89] found id: "20bbcad0ad16d6b2ae5b3ca2e2ab5ba1053638cd0640b1fb4d021b25a50f8234"
	I1025 09:35:44.683511  300996 cri.go:89] found id: "78f4542e06d9b5434e621ac06a3ce1eec88a9d17fee72f530244bb01d81a42bb"
	I1025 09:35:44.683514  300996 cri.go:89] found id: ""
	I1025 09:35:44.683566  300996 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 09:35:44.699346  300996 out.go:203] 
	W1025 09:35:44.702238  300996 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:35:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 09:35:44.702267  300996 out.go:285] * 
	* 
	W1025 09:35:44.709188  300996 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 09:35:44.712269  300996 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-523976 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
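
The LocalPath, NvidiaDevicePlugin, and Yakd failures above share one root cause: before disabling an addon, minikube checks whether the cluster is paused, and on this CRI-O profile that check shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on the node. The crictl listings in each stderr dump show the kube-system containers are in fact running, so the exit status 11 (MK_ADDON_DISABLE_PAUSED) comes from the paused check itself, not from the addons under test. A minimal reproduction, assuming the addons-523976 profile from this run is still up:

	$ out/minikube-linux-arm64 -p addons-523976 ssh -- sudo runc list -f json
	# exits 1 on this node with the same error captured above:
	#   time="..." level=error msg="open /run/runc: no such file or directory"

	$ out/minikube-linux-arm64 -p addons-523976 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# succeeds and prints the same kube-system container IDs listed in the logs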

TestFunctional/parallel/ServiceCmdConnect (603.54s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-900552 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-900552 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-v99hn" [a515350f-e8e3-4b6c-b338-b6849cb96b9e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-900552 -n functional-900552
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-25 09:52:34.403880845 +0000 UTC m=+1213.366064346
functional_test.go:1645: (dbg) Run:  kubectl --context functional-900552 describe po hello-node-connect-7d85dfc575-v99hn -n default
functional_test.go:1645: (dbg) kubectl --context functional-900552 describe po hello-node-connect-7d85dfc575-v99hn -n default:
Name:             hello-node-connect-7d85dfc575-v99hn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900552/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:42:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2tlkl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2tlkl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-v99hn to functional-900552
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-900552 logs hello-node-connect-7d85dfc575-v99hn -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-900552 logs hello-node-connect-7d85dfc575-v99hn -n default: exit status 1 (99.935044ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-v99hn" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-900552 logs hello-node-connect-7d85dfc575-v99hn -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
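
The image pulls fail before reaching any registry: CRI-O on this node enforces short-name resolution, and the unqualified "kicbase/echo-server" matches more than one candidate registry, hence "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" on every attempt. Two possible workarounds, both sketched under the assumption that docker.io/kicbase/echo-server is the intended source (the test itself only says "kicbase/echo-server"):

	# 1) deploy with a fully qualified name so no short-name resolution happens
	$ kubectl --context functional-900552 create deployment hello-node-connect \
	      --image docker.io/kicbase/echo-server

	# 2) or pin the short name on the node with a containers-registries alias,
	#    e.g. in /etc/containers/registries.conf.d/50-echo-server.conf:
	#      [aliases]
	#        "kicbase/echo-server" = "docker.io/kicbase/echo-server"
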
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-900552 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-v99hn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900552/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:42:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2tlkl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2tlkl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-v99hn to functional-900552
Normal   Pulling    7m3s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m3s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m3s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m55s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m55s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-900552 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-900552 logs -l app=hello-node-connect: exit status 1 (85.408485ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-v99hn" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-900552 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-900552 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.108.215.61
IPs:                      10.108.215.61
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31057/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
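
Note the empty Endpoints field in the describe output above: the selector matches the pod, but a pod that never becomes Ready is not published as an endpoint, so NodePort 31057 has nothing to forward to even though the Service is configured correctly. A quick confirmation (a follow-up command, not part of the recorded run):

	$ kubectl --context functional-900552 get endpoints hello-node-connect
	# ENDPOINTS stays <none> until the echo-server container pulls and reports Ready
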
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-900552
helpers_test.go:243: (dbg) docker inspect functional-900552:

-- stdout --
	[
	    {
	        "Id": "7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f",
	        "Created": "2025-10-25T09:39:52.104052157Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309798,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T09:39:52.167446381Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f/hostname",
	        "HostsPath": "/var/lib/docker/containers/7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f/hosts",
	        "LogPath": "/var/lib/docker/containers/7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f/7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f-json.log",
	        "Name": "/functional-900552",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-900552:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-900552",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7465521690d89f96db481880ba5e99571a1cd2f058bbfd8aa9c48d6e0621953f",
	                "LowerDir": "/var/lib/docker/overlay2/f03f0daa094b93ccb3def608690915ca7d1459647521a12cd197e936aeec211b-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f03f0daa094b93ccb3def608690915ca7d1459647521a12cd197e936aeec211b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f03f0daa094b93ccb3def608690915ca7d1459647521a12cd197e936aeec211b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f03f0daa094b93ccb3def608690915ca7d1459647521a12cd197e936aeec211b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-900552",
	                "Source": "/var/lib/docker/volumes/functional-900552/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-900552",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-900552",
	                "name.minikube.sigs.k8s.io": "functional-900552",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23352e12243c4e11ec97c0afa4a6c7391f4ff9e1647ff08938bbef3ba3a0edfb",
	            "SandboxKey": "/var/run/docker/netns/23352e12243c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-900552": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:ac:4c:ab:b0:9c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "133397b2ad431f3a6ef6566abb8e4ebb9ae6321a89bcc1793bd5823e633e6904",
	                    "EndpointID": "e17bb525a28edcb65d9ffb09d2dcf93ef550ca9ab28bdb69380fd87cb71c2707",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-900552",
	                        "7465521690d8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-900552 -n functional-900552
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 logs -n 25: (1.452587043s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-900552 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ kubectl │ functional-900552 kubectl -- --context functional-900552 get pods                                                          │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:41 UTC │
	│ start   │ -p functional-900552 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:41 UTC │ 25 Oct 25 09:42 UTC │
	│ service │ invalid-svc -p functional-900552                                                                                           │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ config  │ functional-900552 config unset cpus                                                                                        │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ cp      │ functional-900552 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ config  │ functional-900552 config get cpus                                                                                          │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ config  │ functional-900552 config set cpus 2                                                                                        │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ config  │ functional-900552 config get cpus                                                                                          │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ config  │ functional-900552 config unset cpus                                                                                        │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ config  │ functional-900552 config get cpus                                                                                          │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ ssh     │ functional-900552 ssh -n functional-900552 sudo cat /home/docker/cp-test.txt                                               │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ ssh     │ functional-900552 ssh echo hello                                                                                           │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ cp      │ functional-900552 cp functional-900552:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3199937329/001/cp-test.txt │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ ssh     │ functional-900552 ssh cat /etc/hostname                                                                                    │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ ssh     │ functional-900552 ssh -n functional-900552 sudo cat /home/docker/cp-test.txt                                               │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ tunnel  │ functional-900552 tunnel --alsologtostderr                                                                                 │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ tunnel  │ functional-900552 tunnel --alsologtostderr                                                                                 │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ cp      │ functional-900552 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ tunnel  │ functional-900552 tunnel --alsologtostderr                                                                                 │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │                     │
	│ ssh     │ functional-900552 ssh -n functional-900552 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ addons  │ functional-900552 addons list                                                                                              │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	│ addons  │ functional-900552 addons list -o json                                                                                      │ functional-900552 │ jenkins │ v1.37.0 │ 25 Oct 25 09:42 UTC │ 25 Oct 25 09:42 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:41:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:41:40.481419  313923 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:41:40.481580  313923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:40.481584  313923 out.go:374] Setting ErrFile to fd 2...
	I1025 09:41:40.481587  313923 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:41:40.481864  313923 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:41:40.482260  313923 out.go:368] Setting JSON to false
	I1025 09:41:40.483276  313923 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5050,"bootTime":1761380250,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:41:40.483384  313923 start.go:141] virtualization:  
	I1025 09:41:40.486980  313923 out.go:179] * [functional-900552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:41:40.490143  313923 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:41:40.490238  313923 notify.go:220] Checking for updates...
	I1025 09:41:40.496315  313923 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:41:40.499341  313923 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:41:40.502286  313923 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:41:40.505380  313923 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:41:40.508286  313923 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:41:40.511742  313923 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:40.511835  313923 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:41:40.547874  313923 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:41:40.548007  313923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:40.616809  313923 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 09:41:40.607168919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:41:40.616905  313923 docker.go:318] overlay module found
	I1025 09:41:40.620102  313923 out.go:179] * Using the docker driver based on existing profile
	I1025 09:41:40.623055  313923 start.go:305] selected driver: docker
	I1025 09:41:40.623073  313923 start.go:925] validating driver "docker" against &{Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:40.623248  313923 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:41:40.623360  313923 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:41:40.687255  313923 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-25 09:41:40.677846965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:41:40.687678  313923 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:41:40.687696  313923 cni.go:84] Creating CNI manager for ""
	I1025 09:41:40.687759  313923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:41:40.687798  313923 start.go:349] cluster config:
	{Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:40.691044  313923 out.go:179] * Starting "functional-900552" primary control-plane node in "functional-900552" cluster
	I1025 09:41:40.693880  313923 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:41:40.696666  313923 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:41:40.699436  313923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:41:40.699483  313923 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 09:41:40.699491  313923 cache.go:58] Caching tarball of preloaded images
	I1025 09:41:40.699530  313923 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:41:40.699576  313923 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 09:41:40.699595  313923 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 09:41:40.699707  313923 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/config.json ...
	I1025 09:41:40.736216  313923 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 09:41:40.736229  313923 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 09:41:40.736248  313923 cache.go:232] Successfully downloaded all kic artifacts
	I1025 09:41:40.736270  313923 start.go:360] acquireMachinesLock for functional-900552: {Name:mka64a8c53eb69be3ab4623ebe0f1b6832849081 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 09:41:40.736336  313923 start.go:364] duration metric: took 49.716µs to acquireMachinesLock for "functional-900552"
	I1025 09:41:40.736355  313923 start.go:96] Skipping create...Using existing machine configuration
	I1025 09:41:40.736360  313923 fix.go:54] fixHost starting: 
	I1025 09:41:40.736621  313923 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
	I1025 09:41:40.756214  313923 fix.go:112] recreateIfNeeded on functional-900552: state=Running err=<nil>
	W1025 09:41:40.756235  313923 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 09:41:40.759408  313923 out.go:252] * Updating the running docker "functional-900552" container ...
	I1025 09:41:40.759444  313923 machine.go:93] provisionDockerMachine start ...
	I1025 09:41:40.759520  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:40.777248  313923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:41:40.777612  313923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1025 09:41:40.777620  313923 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 09:41:40.926769  313923 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-900552
	
	I1025 09:41:40.926791  313923 ubuntu.go:182] provisioning hostname "functional-900552"
	I1025 09:41:40.926864  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:40.945440  313923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:41:40.945746  313923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1025 09:41:40.945755  313923 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-900552 && echo "functional-900552" | sudo tee /etc/hostname
	I1025 09:41:41.110243  313923 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-900552
	
	I1025 09:41:41.110312  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:41.129478  313923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:41:41.129782  313923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1025 09:41:41.129797  313923 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-900552' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-900552/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-900552' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 09:41:41.279497  313923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
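
The two SSH commands above are a stock provisioning pattern: set the kernel hostname, persist it to /etc/hostname, then make the name resolve locally through the Debian-style 127.0.1.1 entry in /etc/hosts. The same steps as a standalone sketch (NAME is a placeholder for the machine name):

	NAME=functional-900552   # placeholder machine name
	sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	# Rewrite the 127.0.1.1 line if one exists, otherwise append one.
	if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
		if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
			sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts
		else
			echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts
		fi
	fi
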
	I1025 09:41:41.279527  313923 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 09:41:41.279549  313923 ubuntu.go:190] setting up certificates
	I1025 09:41:41.279558  313923 provision.go:84] configureAuth start
	I1025 09:41:41.279622  313923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-900552
	I1025 09:41:41.297872  313923 provision.go:143] copyHostCerts
	I1025 09:41:41.297943  313923 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 09:41:41.297967  313923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 09:41:41.298096  313923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 09:41:41.298223  313923 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 09:41:41.298239  313923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 09:41:41.298276  313923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 09:41:41.298360  313923 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 09:41:41.298368  313923 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 09:41:41.298395  313923 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 09:41:41.298456  313923 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.functional-900552 san=[127.0.0.1 192.168.49.2 functional-900552 localhost minikube]
	I1025 09:41:41.417182  313923 provision.go:177] copyRemoteCerts
	I1025 09:41:41.417237  313923 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 09:41:41.417276  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:41.435171  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:41:41.539351  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 09:41:41.560373  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 09:41:41.578219  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 09:41:41.595694  313923 provision.go:87] duration metric: took 316.122266ms to configureAuth
	I1025 09:41:41.595710  313923 ubuntu.go:206] setting minikube options for container-runtime
	I1025 09:41:41.595899  313923 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:41:41.596004  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:41.614467  313923 main.go:141] libmachine: Using SSH client type: native
	I1025 09:41:41.614784  313923 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33152 <nil> <nil>}
	I1025 09:41:41.614800  313923 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 09:41:46.989096  313923 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 09:41:46.989108  313923 machine.go:96] duration metric: took 6.229657768s to provisionDockerMachine
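
The ~6s spent in provisionDockerMachine is dominated by the step above: writing an environment drop-in under /etc/sysconfig and restarting CRI-O so it picks up the --insecure-registry flag for the service CIDR. Assuming, as the log suggests, that the kicbase crio unit sources /etc/sysconfig/crio.minikube, the result can be checked by hand:

	# Show the options file minikube wrote, then confirm crio came back up.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl is-active crio
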
	I1025 09:41:46.989117  313923 start.go:293] postStartSetup for "functional-900552" (driver="docker")
	I1025 09:41:46.989128  313923 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 09:41:46.989207  313923 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 09:41:46.989243  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:47.007835  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:41:47.110892  313923 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 09:41:47.114041  313923 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 09:41:47.114059  313923 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 09:41:47.114068  313923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 09:41:47.114121  313923 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 09:41:47.114191  313923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 09:41:47.114263  313923 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/test/nested/copy/294017/hosts -> hosts in /etc/test/nested/copy/294017
	I1025 09:41:47.114306  313923 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/294017
	I1025 09:41:47.121701  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 09:41:47.139130  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/test/nested/copy/294017/hosts --> /etc/test/nested/copy/294017/hosts (40 bytes)
	I1025 09:41:47.156760  313923 start.go:296] duration metric: took 167.629076ms for postStartSetup
	I1025 09:41:47.156830  313923 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:41:47.156868  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:47.173025  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:41:47.272895  313923 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 09:41:47.278004  313923 fix.go:56] duration metric: took 6.541631984s for fixHost
	I1025 09:41:47.278026  313923 start.go:83] releasing machines lock for "functional-900552", held for 6.541681928s
	I1025 09:41:47.278118  313923 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-900552
	I1025 09:41:47.294914  313923 ssh_runner.go:195] Run: cat /version.json
	I1025 09:41:47.294955  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:47.295299  313923 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 09:41:47.295360  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:41:47.318029  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:41:47.323292  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:41:47.418985  313923 ssh_runner.go:195] Run: systemctl --version
	I1025 09:41:47.507876  313923 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 09:41:47.542264  313923 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 09:41:47.546611  313923 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 09:41:47.546669  313923 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 09:41:47.554496  313923 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 09:41:47.554509  313923 start.go:495] detecting cgroup driver to use...
	I1025 09:41:47.554540  313923 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 09:41:47.554590  313923 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 09:41:47.570364  313923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 09:41:47.583343  313923 docker.go:218] disabling cri-docker service (if available) ...
	I1025 09:41:47.583394  313923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 09:41:47.598780  313923 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 09:41:47.612111  313923 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 09:41:47.743133  313923 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 09:41:47.875320  313923 docker.go:234] disabling docker service ...
	I1025 09:41:47.875375  313923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 09:41:47.890175  313923 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 09:41:47.902726  313923 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 09:41:48.030428  313923 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 09:41:48.160054  313923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
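
Only one container runtime can own the node, so the block above retires the Docker-based runtimes with the usual systemd sequence: stop the socket and the service, disable the socket, then mask the service so nothing can reactivate it. The same pattern, generalized (UNIT is a placeholder; the log applies it to both cri-docker and docker):

	UNIT=docker   # placeholder unit name
	sudo systemctl stop -f "$UNIT.socket" "$UNIT.service"
	sudo systemctl disable "$UNIT.socket"
	sudo systemctl mask "$UNIT.service"   # blocks socket- and dependency-activation
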
	I1025 09:41:48.173388  313923 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 09:41:48.187734  313923 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 09:41:48.187802  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.196660  313923 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 09:41:48.196735  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.205470  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.214163  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.222870  313923 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 09:41:48.230723  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.239389  313923 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.247652  313923 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 09:41:48.256717  313923 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 09:41:48.264454  313923 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 09:41:48.271505  313923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:41:48.401065  313923 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 09:41:54.893833  313923 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.492743099s)
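
The sed run above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and seed default_sysctls so pods may bind ports below 1024; the 6.5s restart at the end is CRI-O reloading that config. Condensed into one sketch using the exact expressions from the log:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	# Create an empty default_sysctls list if none exists, then prepend the entry.
	sudo grep -q '^ *default_sysctls' "$CONF" || \
		sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio
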
	I1025 09:41:54.893851  313923 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 09:41:54.893909  313923 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 09:41:54.898118  313923 start.go:563] Will wait 60s for crictl version
	I1025 09:41:54.898169  313923 ssh_runner.go:195] Run: which crictl
	I1025 09:41:54.901751  313923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 09:41:54.925384  313923 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 09:41:54.925457  313923 ssh_runner.go:195] Run: crio --version
	I1025 09:41:54.952601  313923 ssh_runner.go:195] Run: crio --version
	I1025 09:41:54.983912  313923 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 09:41:54.986972  313923 cli_runner.go:164] Run: docker network inspect functional-900552 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 09:41:55.010264  313923 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1025 09:41:55.018899  313923 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1025 09:41:55.021802  313923 kubeadm.go:883] updating cluster {Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 09:41:55.021930  313923 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 09:41:55.022016  313923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:41:55.056882  313923 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:41:55.056894  313923 crio.go:433] Images already preloaded, skipping extraction
	I1025 09:41:55.056947  313923 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 09:41:55.085682  313923 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 09:41:55.085692  313923 cache_images.go:85] Images are preloaded, skipping loading
	I1025 09:41:55.085703  313923 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1025 09:41:55.085804  313923 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-900552 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 09:41:55.085885  313923 ssh_runner.go:195] Run: crio config
	I1025 09:41:55.158119  313923 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1025 09:41:55.158143  313923 cni.go:84] Creating CNI manager for ""
	I1025 09:41:55.158153  313923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:41:55.158161  313923 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 09:41:55.158184  313923 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-900552 NodeName:functional-900552 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 09:41:55.158307  313923 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-900552"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
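
The YAML above is the full multi-document kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an aside, recent kubeadm releases can sanity-check such a file before it is used; whether this subcommand is available depends on the kubeadm version:

	# kubeadm >= v1.26 ships `kubeadm config validate` (version-dependent).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
		--config /var/tmp/minikube/kubeadm.yaml.new
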
	
	I1025 09:41:55.158377  313923 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 09:41:55.166587  313923 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 09:41:55.166691  313923 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 09:41:55.174328  313923 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 09:41:55.187099  313923 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 09:41:55.199477  313923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1025 09:41:55.212133  313923 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1025 09:41:55.215781  313923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:41:55.351921  313923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:41:55.365740  313923 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552 for IP: 192.168.49.2
	I1025 09:41:55.365750  313923 certs.go:195] generating shared ca certs ...
	I1025 09:41:55.365775  313923 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:41:55.365910  313923 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 09:41:55.365952  313923 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 09:41:55.365958  313923 certs.go:257] generating profile certs ...
	I1025 09:41:55.366034  313923 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.key
	I1025 09:41:55.366078  313923 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/apiserver.key.caf7f317
	I1025 09:41:55.366109  313923 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/proxy-client.key
	I1025 09:41:55.366229  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 09:41:55.366253  313923 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 09:41:55.366260  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 09:41:55.366283  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 09:41:55.366303  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 09:41:55.366332  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 09:41:55.366372  313923 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 09:41:55.366940  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 09:41:55.385499  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 09:41:55.404518  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 09:41:55.423046  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 09:41:55.441597  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 09:41:55.459117  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 09:41:55.476654  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 09:41:55.497031  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 09:41:55.517406  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 09:41:55.536265  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 09:41:55.554773  313923 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 09:41:55.572906  313923 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 09:41:55.585684  313923 ssh_runner.go:195] Run: openssl version
	I1025 09:41:55.591933  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 09:41:55.600474  313923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 09:41:55.604251  313923 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 09:41:55.604306  313923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 09:41:55.647683  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 09:41:55.655523  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 09:41:55.663725  313923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:41:55.667509  313923 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:41:55.667575  313923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 09:41:55.709956  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 09:41:55.717875  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 09:41:55.726078  313923 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 09:41:55.729873  313923 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 09:41:55.729927  313923 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 09:41:55.770942  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
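
Each certificate block above follows OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA here), and a symlink named <hash>.0 under /etc/ssl/certs is what lets TLS clients discover the CA without rebuilding a bundle. The same step in isolation, using a path from the log:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"    # .0 = first cert with this hash
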
	I1025 09:41:55.779133  313923 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 09:41:55.782784  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 09:41:55.823682  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 09:41:55.864614  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 09:41:55.905501  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 09:41:55.947021  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 09:41:55.989109  313923 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
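
`openssl x509 -checkend 86400` exits non-zero when the certificate expires within the next 86400 seconds (24 hours), so each Run above is a cheap expiry probe on one control-plane cert. The same probes as a loop:

	for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
		sudo openssl x509 -noout -checkend 86400 \
			-in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
	done
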
	I1025 09:41:56.030204  313923 kubeadm.go:400] StartCluster: {Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:41:56.030296  313923 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 09:41:56.030362  313923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:41:56.058698  313923 cri.go:89] found id: "7c7a6c3c8080b764fccd74a8ee58fab027d2c3513906fd18ab3638d14c9bd6e9"
	I1025 09:41:56.058709  313923 cri.go:89] found id: "8d90be83263651f189726e0eef82fad5cfdca1e0c1d7f7ca63681ce011a47954"
	I1025 09:41:56.058712  313923 cri.go:89] found id: "056fec35ce7b095d969685390970a2d38f4e9d2f847d4af1cd33e0bfb07f8dab"
	I1025 09:41:56.058715  313923 cri.go:89] found id: "7015b551ded76028c75ed198921917e4a9b787f5aeb48fc01cd64fed616b209e"
	I1025 09:41:56.058717  313923 cri.go:89] found id: "93d657fb4dc86f35464f12adb8ac86693e9e6e8edb1bbc68bf6ada461b17e83a"
	I1025 09:41:56.058721  313923 cri.go:89] found id: "d074e67e65088cb0fcfd238f6c75480732f62832c11a38c11a408040588a9651"
	I1025 09:41:56.058723  313923 cri.go:89] found id: "6ab99e50dba1296d1c23d587abfccfc5f4f4d929aa66db3ae9c19c380ec851a2"
	I1025 09:41:56.058725  313923 cri.go:89] found id: "b537cfd3a6826d3a9d475de92011c66a451a69dc15936a77ff4f4adf74fea1e7"
	I1025 09:41:56.058727  313923 cri.go:89] found id: "b7ed343b1789389d5d1021ecb77041df90de8d522f064869fbde923b82446b71"
	I1025 09:41:56.058733  313923 cri.go:89] found id: "21802cfe92f129f77f75288c7584e661302967c5b301b7522161194755a30884"
	I1025 09:41:56.058735  313923 cri.go:89] found id: "34356b68205569f9a6f9ee5f4a38ffff7a29cdfbe88406a95282bdc080b286b7"
	I1025 09:41:56.058737  313923 cri.go:89] found id: "44804466b651dd47045fdf8f43cc93336ce6cca14b0699b87010cb64601e170e"
	I1025 09:41:56.058740  313923 cri.go:89] found id: "a9559a6760f227d447a8a113485bd6133e2841573ee4d272fca813230ca7119e"
	I1025 09:41:56.058742  313923 cri.go:89] found id: "ad21cacba1bf6ffc05c49391e4a8068ec37b06a044460d3535f9144b93f627e6"
	I1025 09:41:56.058744  313923 cri.go:89] found id: "6281dc14cee4515daba84864929f2f166eb0e2be22acc70a9e0cbfe53dd6f4b6"
	I1025 09:41:56.058747  313923 cri.go:89] found id: "eea3f94e2d39b4b0ecb7f39794221ccc282191a4549309f7873ce7ef5aff26b5"
	I1025 09:41:56.058750  313923 cri.go:89] found id: ""
	I1025 09:41:56.058798  313923 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 09:41:56.070736  313923 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:41:56Z" level=error msg="open /run/runc: no such file or directory"
	I1025 09:41:56.070824  313923 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 09:41:56.079453  313923 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 09:41:56.079463  313923 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 09:41:56.079513  313923 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 09:41:56.087200  313923 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:41:56.087715  313923 kubeconfig.go:125] found "functional-900552" server: "https://192.168.49.2:8441"
	I1025 09:41:56.089419  313923 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 09:41:56.100866  313923 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-25 09:39:58.251229235 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-25 09:41:55.204678656 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
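
Drift detection is nothing more than a unified diff between the kubeadm config on disk and the freshly rendered one; the non-empty hunk above (the admission-plugins change requested by this test) is what triggers the stop-and-reconfigure path that follows. In shell terms:

	# diff -u exits non-zero when the rendered config differs from the active one.
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
		|| echo "kubeadm config drift: cluster will be reconfigured"
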
	I1025 09:41:56.100877  313923 kubeadm.go:1160] stopping kube-system containers ...
	I1025 09:41:56.100889  313923 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1025 09:41:56.100950  313923 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 09:41:56.130898  313923 cri.go:89] found id: "7c7a6c3c8080b764fccd74a8ee58fab027d2c3513906fd18ab3638d14c9bd6e9"
	I1025 09:41:56.130910  313923 cri.go:89] found id: "8d90be83263651f189726e0eef82fad5cfdca1e0c1d7f7ca63681ce011a47954"
	I1025 09:41:56.130913  313923 cri.go:89] found id: "056fec35ce7b095d969685390970a2d38f4e9d2f847d4af1cd33e0bfb07f8dab"
	I1025 09:41:56.130916  313923 cri.go:89] found id: "7015b551ded76028c75ed198921917e4a9b787f5aeb48fc01cd64fed616b209e"
	I1025 09:41:56.130918  313923 cri.go:89] found id: "93d657fb4dc86f35464f12adb8ac86693e9e6e8edb1bbc68bf6ada461b17e83a"
	I1025 09:41:56.130922  313923 cri.go:89] found id: "d074e67e65088cb0fcfd238f6c75480732f62832c11a38c11a408040588a9651"
	I1025 09:41:56.130924  313923 cri.go:89] found id: "6ab99e50dba1296d1c23d587abfccfc5f4f4d929aa66db3ae9c19c380ec851a2"
	I1025 09:41:56.130926  313923 cri.go:89] found id: "b537cfd3a6826d3a9d475de92011c66a451a69dc15936a77ff4f4adf74fea1e7"
	I1025 09:41:56.130928  313923 cri.go:89] found id: "b7ed343b1789389d5d1021ecb77041df90de8d522f064869fbde923b82446b71"
	I1025 09:41:56.130936  313923 cri.go:89] found id: "21802cfe92f129f77f75288c7584e661302967c5b301b7522161194755a30884"
	I1025 09:41:56.130938  313923 cri.go:89] found id: "34356b68205569f9a6f9ee5f4a38ffff7a29cdfbe88406a95282bdc080b286b7"
	I1025 09:41:56.130940  313923 cri.go:89] found id: "44804466b651dd47045fdf8f43cc93336ce6cca14b0699b87010cb64601e170e"
	I1025 09:41:56.130942  313923 cri.go:89] found id: "a9559a6760f227d447a8a113485bd6133e2841573ee4d272fca813230ca7119e"
	I1025 09:41:56.130945  313923 cri.go:89] found id: "ad21cacba1bf6ffc05c49391e4a8068ec37b06a044460d3535f9144b93f627e6"
	I1025 09:41:56.130946  313923 cri.go:89] found id: "6281dc14cee4515daba84864929f2f166eb0e2be22acc70a9e0cbfe53dd6f4b6"
	I1025 09:41:56.130951  313923 cri.go:89] found id: "eea3f94e2d39b4b0ecb7f39794221ccc282191a4549309f7873ce7ef5aff26b5"
	I1025 09:41:56.130953  313923 cri.go:89] found id: ""
	I1025 09:41:56.130958  313923 cri.go:252] Stopping containers: [7c7a6c3c8080b764fccd74a8ee58fab027d2c3513906fd18ab3638d14c9bd6e9 8d90be83263651f189726e0eef82fad5cfdca1e0c1d7f7ca63681ce011a47954 056fec35ce7b095d969685390970a2d38f4e9d2f847d4af1cd33e0bfb07f8dab 7015b551ded76028c75ed198921917e4a9b787f5aeb48fc01cd64fed616b209e 93d657fb4dc86f35464f12adb8ac86693e9e6e8edb1bbc68bf6ada461b17e83a d074e67e65088cb0fcfd238f6c75480732f62832c11a38c11a408040588a9651 6ab99e50dba1296d1c23d587abfccfc5f4f4d929aa66db3ae9c19c380ec851a2 b537cfd3a6826d3a9d475de92011c66a451a69dc15936a77ff4f4adf74fea1e7 b7ed343b1789389d5d1021ecb77041df90de8d522f064869fbde923b82446b71 21802cfe92f129f77f75288c7584e661302967c5b301b7522161194755a30884 34356b68205569f9a6f9ee5f4a38ffff7a29cdfbe88406a95282bdc080b286b7 44804466b651dd47045fdf8f43cc93336ce6cca14b0699b87010cb64601e170e a9559a6760f227d447a8a113485bd6133e2841573ee4d272fca813230ca7119e ad21cacba1bf6ffc05c49391e4a8068ec37b06a044460d3535f9144b93f627e6 6281dc14cee4515daba84864929f2f166eb0e2be22acc70a9e0cbfe53dd6f4b6 eea3f94e2d39b4b0ecb7f39794221ccc282191a4549309f7873ce7ef5aff26b5]
	I1025 09:41:56.131020  313923 ssh_runner.go:195] Run: which crictl
	I1025 09:41:56.134948  313923 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 7c7a6c3c8080b764fccd74a8ee58fab027d2c3513906fd18ab3638d14c9bd6e9 8d90be83263651f189726e0eef82fad5cfdca1e0c1d7f7ca63681ce011a47954 056fec35ce7b095d969685390970a2d38f4e9d2f847d4af1cd33e0bfb07f8dab 7015b551ded76028c75ed198921917e4a9b787f5aeb48fc01cd64fed616b209e 93d657fb4dc86f35464f12adb8ac86693e9e6e8edb1bbc68bf6ada461b17e83a d074e67e65088cb0fcfd238f6c75480732f62832c11a38c11a408040588a9651 6ab99e50dba1296d1c23d587abfccfc5f4f4d929aa66db3ae9c19c380ec851a2 b537cfd3a6826d3a9d475de92011c66a451a69dc15936a77ff4f4adf74fea1e7 b7ed343b1789389d5d1021ecb77041df90de8d522f064869fbde923b82446b71 21802cfe92f129f77f75288c7584e661302967c5b301b7522161194755a30884 34356b68205569f9a6f9ee5f4a38ffff7a29cdfbe88406a95282bdc080b286b7 44804466b651dd47045fdf8f43cc93336ce6cca14b0699b87010cb64601e170e a9559a6760f227d447a8a113485bd6133e2841573ee4d272fca813230ca7119e ad21cacba1bf6ffc05c49391e4a8068ec37b06a044460d3535f9144b93f627e6 6281dc14cee4515daba84864929f2f166eb0e2be22acc70a9e0cbfe53dd6f4b6 eea3f94e2d39b4b0ecb7f39794221ccc282191a4549309f7873ce7ef5aff26b5
	I1025 09:41:56.234461  313923 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1025 09:41:56.346215  313923 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 09:41:56.354601  313923 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5635 Oct 25 09:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5640 Oct 25 09:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 25 09:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Oct 25 09:40 /etc/kubernetes/scheduler.conf
	
	I1025 09:41:56.354666  313923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1025 09:41:56.362987  313923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1025 09:41:56.370985  313923 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:41:56.371040  313923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 09:41:56.378800  313923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1025 09:41:56.387019  313923 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:41:56.387077  313923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 09:41:56.394543  313923 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1025 09:41:56.402520  313923 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1025 09:41:56.402580  313923 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 09:41:56.410589  313923 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 09:41:56.418528  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:41:56.471069  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:41:59.468692  313923 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.997598623s)
	I1025 09:41:59.468750  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:41:59.692175  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:41:59.760596  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:41:59.824570  313923 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:41:59.824636  313923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:00.325717  313923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:00.824765  313923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:00.841200  313923 api_server.go:72] duration metric: took 1.01663007s to wait for apiserver process to appear ...
	I1025 09:42:00.841216  313923 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:42:00.841234  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:04.303652  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:42:04.303671  313923 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:42:04.303682  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:04.560305  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:42:04.560322  313923 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:42:04.560333  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:04.574401  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1025 09:42:04.574416  313923 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1025 09:42:04.841828  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:04.866426  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:42:04.866443  313923 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:42:05.342067  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:05.350194  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 09:42:05.350218  313923 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 09:42:05.841995  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:05.853777  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1025 09:42:05.873196  313923 api_server.go:141] control plane version: v1.34.1
	I1025 09:42:05.873211  313923 api_server.go:131] duration metric: took 5.031991045s to wait for apiserver health ...
	I1025 09:42:05.873219  313923 cni.go:84] Creating CNI manager for ""
	I1025 09:42:05.873251  313923 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:42:05.876433  313923 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 09:42:05.879285  313923 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 09:42:05.883599  313923 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 09:42:05.883628  313923 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 09:42:05.896794  313923 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 09:42:06.374914  313923 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:42:06.378213  313923 system_pods.go:59] 8 kube-system pods found
	I1025 09:42:06.378233  313923 system_pods.go:61] "coredns-66bc5c9577-hdntf" [f1f90078-1b81-4a86-a4f6-ef3016b2fc53] Running
	I1025 09:42:06.378242  313923 system_pods.go:61] "etcd-functional-900552" [5bdd999f-d36f-45a4-ad83-d0caf3f1a304] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:42:06.378247  313923 system_pods.go:61] "kindnet-jvghm" [cfba3e24-cea9-4ac9-9603-e73ef3eb8267] Running
	I1025 09:42:06.378254  313923 system_pods.go:61] "kube-apiserver-functional-900552" [790cbe2c-0ecf-44f4-abc5-7e8b395be1b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:42:06.378261  313923 system_pods.go:61] "kube-controller-manager-functional-900552" [a5ac981e-de86-438a-8465-13efc129b12f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:42:06.378266  313923 system_pods.go:61] "kube-proxy-w94xc" [64a66cc6-014e-434a-b4b0-6b53ea3ef15f] Running
	I1025 09:42:06.378272  313923 system_pods.go:61] "kube-scheduler-functional-900552" [0ab587b0-2cac-483c-be40-4be0b286214e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:42:06.378275  313923 system_pods.go:61] "storage-provisioner" [f04b08cb-4093-4245-9f5f-4f696215a979] Running
	I1025 09:42:06.378280  313923 system_pods.go:74] duration metric: took 3.355618ms to wait for pod list to return data ...
	I1025 09:42:06.378286  313923 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:42:06.381190  313923 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:42:06.381212  313923 node_conditions.go:123] node cpu capacity is 2
	I1025 09:42:06.381221  313923 node_conditions.go:105] duration metric: took 2.93181ms to run NodePressure ...
	I1025 09:42:06.381282  313923 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1025 09:42:06.635949  313923 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1025 09:42:06.642067  313923 kubeadm.go:743] kubelet initialised
	I1025 09:42:06.642088  313923 kubeadm.go:744] duration metric: took 6.116391ms waiting for restarted kubelet to initialise ...
	I1025 09:42:06.642103  313923 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 09:42:06.651122  313923 ops.go:34] apiserver oom_adj: -16
	I1025 09:42:06.651133  313923 kubeadm.go:601] duration metric: took 10.571664842s to restartPrimaryControlPlane
	I1025 09:42:06.651141  313923 kubeadm.go:402] duration metric: took 10.620947497s to StartCluster
	I1025 09:42:06.651175  313923 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:42:06.651249  313923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:42:06.651857  313923 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:42:06.652058  313923 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 09:42:06.652394  313923 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:42:06.652413  313923 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 09:42:06.652516  313923 addons.go:69] Setting default-storageclass=true in profile "functional-900552"
	I1025 09:42:06.652527  313923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-900552"
	I1025 09:42:06.652523  313923 addons.go:69] Setting storage-provisioner=true in profile "functional-900552"
	I1025 09:42:06.652539  313923 addons.go:238] Setting addon storage-provisioner=true in "functional-900552"
	W1025 09:42:06.652545  313923 addons.go:247] addon storage-provisioner should already be in state true
	I1025 09:42:06.652568  313923 host.go:66] Checking if "functional-900552" exists ...
	I1025 09:42:06.652788  313923 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
	I1025 09:42:06.652983  313923 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
	I1025 09:42:06.660658  313923 out.go:179] * Verifying Kubernetes components...
	I1025 09:42:06.663852  313923 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 09:42:06.689033  313923 addons.go:238] Setting addon default-storageclass=true in "functional-900552"
	W1025 09:42:06.689043  313923 addons.go:247] addon default-storageclass should already be in state true
	I1025 09:42:06.689064  313923 host.go:66] Checking if "functional-900552" exists ...
	I1025 09:42:06.689571  313923 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
	I1025 09:42:06.691302  313923 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 09:42:06.694429  313923 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:42:06.694439  313923 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 09:42:06.694499  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:42:06.711205  313923 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 09:42:06.711217  313923 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 09:42:06.711281  313923 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:42:06.743933  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:42:06.750511  313923 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:42:06.879526  313923 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 09:42:06.893362  313923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 09:42:06.919499  313923 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 09:42:06.925338  313923 node_ready.go:35] waiting up to 6m0s for node "functional-900552" to be "Ready" ...
	I1025 09:42:06.930761  313923 node_ready.go:49] node "functional-900552" is "Ready"
	I1025 09:42:06.930778  313923 node_ready.go:38] duration metric: took 5.419816ms for node "functional-900552" to be "Ready" ...
	I1025 09:42:06.930790  313923 api_server.go:52] waiting for apiserver process to appear ...
	I1025 09:42:06.930858  313923 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:42:07.664950  313923 api_server.go:72] duration metric: took 1.012867613s to wait for apiserver process to appear ...
	I1025 09:42:07.664963  313923 api_server.go:88] waiting for apiserver healthz status ...
	I1025 09:42:07.664980  313923 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1025 09:42:07.668233  313923 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 09:42:07.671110  313923 addons.go:514] duration metric: took 1.018682346s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 09:42:07.675489  313923 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1025 09:42:07.676427  313923 api_server.go:141] control plane version: v1.34.1
	I1025 09:42:07.676438  313923 api_server.go:131] duration metric: took 11.469933ms to wait for apiserver health ...
	I1025 09:42:07.676446  313923 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 09:42:07.679665  313923 system_pods.go:59] 8 kube-system pods found
	I1025 09:42:07.679678  313923 system_pods.go:61] "coredns-66bc5c9577-hdntf" [f1f90078-1b81-4a86-a4f6-ef3016b2fc53] Running
	I1025 09:42:07.679686  313923 system_pods.go:61] "etcd-functional-900552" [5bdd999f-d36f-45a4-ad83-d0caf3f1a304] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:42:07.679692  313923 system_pods.go:61] "kindnet-jvghm" [cfba3e24-cea9-4ac9-9603-e73ef3eb8267] Running
	I1025 09:42:07.679699  313923 system_pods.go:61] "kube-apiserver-functional-900552" [790cbe2c-0ecf-44f4-abc5-7e8b395be1b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:42:07.679705  313923 system_pods.go:61] "kube-controller-manager-functional-900552" [a5ac981e-de86-438a-8465-13efc129b12f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:42:07.679709  313923 system_pods.go:61] "kube-proxy-w94xc" [64a66cc6-014e-434a-b4b0-6b53ea3ef15f] Running
	I1025 09:42:07.679714  313923 system_pods.go:61] "kube-scheduler-functional-900552" [0ab587b0-2cac-483c-be40-4be0b286214e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:42:07.679717  313923 system_pods.go:61] "storage-provisioner" [f04b08cb-4093-4245-9f5f-4f696215a979] Running
	I1025 09:42:07.679722  313923 system_pods.go:74] duration metric: took 3.271556ms to wait for pod list to return data ...
	I1025 09:42:07.679728  313923 default_sa.go:34] waiting for default service account to be created ...
	I1025 09:42:07.682687  313923 default_sa.go:45] found service account: "default"
	I1025 09:42:07.682699  313923 default_sa.go:55] duration metric: took 2.967076ms for default service account to be created ...
	I1025 09:42:07.682707  313923 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 09:42:07.686083  313923 system_pods.go:86] 8 kube-system pods found
	I1025 09:42:07.686096  313923 system_pods.go:89] "coredns-66bc5c9577-hdntf" [f1f90078-1b81-4a86-a4f6-ef3016b2fc53] Running
	I1025 09:42:07.686104  313923 system_pods.go:89] "etcd-functional-900552" [5bdd999f-d36f-45a4-ad83-d0caf3f1a304] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 09:42:07.686108  313923 system_pods.go:89] "kindnet-jvghm" [cfba3e24-cea9-4ac9-9603-e73ef3eb8267] Running
	I1025 09:42:07.686115  313923 system_pods.go:89] "kube-apiserver-functional-900552" [790cbe2c-0ecf-44f4-abc5-7e8b395be1b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 09:42:07.686121  313923 system_pods.go:89] "kube-controller-manager-functional-900552" [a5ac981e-de86-438a-8465-13efc129b12f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 09:42:07.686125  313923 system_pods.go:89] "kube-proxy-w94xc" [64a66cc6-014e-434a-b4b0-6b53ea3ef15f] Running
	I1025 09:42:07.686130  313923 system_pods.go:89] "kube-scheduler-functional-900552" [0ab587b0-2cac-483c-be40-4be0b286214e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 09:42:07.686133  313923 system_pods.go:89] "storage-provisioner" [f04b08cb-4093-4245-9f5f-4f696215a979] Running
	I1025 09:42:07.686138  313923 system_pods.go:126] duration metric: took 3.427357ms to wait for k8s-apps to be running ...
	I1025 09:42:07.686144  313923 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 09:42:07.686200  313923 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:42:07.699323  313923 system_svc.go:56] duration metric: took 13.169663ms WaitForService to wait for kubelet
	I1025 09:42:07.699341  313923 kubeadm.go:586] duration metric: took 1.047262815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 09:42:07.699359  313923 node_conditions.go:102] verifying NodePressure condition ...
	I1025 09:42:07.702354  313923 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 09:42:07.702368  313923 node_conditions.go:123] node cpu capacity is 2
	I1025 09:42:07.702377  313923 node_conditions.go:105] duration metric: took 3.01392ms to run NodePressure ...
	I1025 09:42:07.702387  313923 start.go:241] waiting for startup goroutines ...
	I1025 09:42:07.702393  313923 start.go:246] waiting for cluster config update ...
	I1025 09:42:07.702403  313923 start.go:255] writing updated cluster config ...
	I1025 09:42:07.702684  313923 ssh_runner.go:195] Run: rm -f paused
	I1025 09:42:07.706015  313923 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:42:07.709301  313923 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hdntf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:07.713696  313923 pod_ready.go:94] pod "coredns-66bc5c9577-hdntf" is "Ready"
	I1025 09:42:07.713710  313923 pod_ready.go:86] duration metric: took 4.397592ms for pod "coredns-66bc5c9577-hdntf" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:07.715933  313923 pod_ready.go:83] waiting for pod "etcd-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 09:42:09.721304  313923 pod_ready.go:104] pod "etcd-functional-900552" is not "Ready", error: <nil>
	W1025 09:42:11.721339  313923 pod_ready.go:104] pod "etcd-functional-900552" is not "Ready", error: <nil>
	I1025 09:42:12.721840  313923 pod_ready.go:94] pod "etcd-functional-900552" is "Ready"
	I1025 09:42:12.721854  313923 pod_ready.go:86] duration metric: took 5.005911553s for pod "etcd-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.724477  313923 pod_ready.go:83] waiting for pod "kube-apiserver-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.729567  313923 pod_ready.go:94] pod "kube-apiserver-functional-900552" is "Ready"
	I1025 09:42:12.729582  313923 pod_ready.go:86] duration metric: took 5.091238ms for pod "kube-apiserver-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.732406  313923 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.736947  313923 pod_ready.go:94] pod "kube-controller-manager-functional-900552" is "Ready"
	I1025 09:42:12.736961  313923 pod_ready.go:86] duration metric: took 4.542127ms for pod "kube-controller-manager-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.739026  313923 pod_ready.go:83] waiting for pod "kube-proxy-w94xc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:12.920468  313923 pod_ready.go:94] pod "kube-proxy-w94xc" is "Ready"
	I1025 09:42:12.920484  313923 pod_ready.go:86] duration metric: took 181.446852ms for pod "kube-proxy-w94xc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:13.120482  313923 pod_ready.go:83] waiting for pod "kube-scheduler-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:14.319592  313923 pod_ready.go:94] pod "kube-scheduler-functional-900552" is "Ready"
	I1025 09:42:14.319605  313923 pod_ready.go:86] duration metric: took 1.199110521s for pod "kube-scheduler-functional-900552" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 09:42:14.319615  313923 pod_ready.go:40] duration metric: took 6.613580936s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 09:42:14.375668  313923 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 09:42:14.378649  313923 out.go:179] * Done! kubectl is now configured to use "functional-900552" cluster and "default" namespace by default
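
Note on the healthz progression above: the 403 responses come from the anonymous probe hitting an apiserver whose RBAC bootstrap roles are not yet installed, the 500s from pending post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes), and the final 200 marks a healthy restart. As a minimal sketch of such a polling loop — the URL is the one from the log, but the client setup, 500ms interval, and timeout are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it
// answers 200 "ok" or the deadline passes. During a control-plane
// restart it typically answers 403 (anonymous user, RBAC not yet
// bootstrapped) and then 500 (post-start hooks still pending),
// exactly as the log above shows.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver presents a cluster-internal certificate, so this
		// unauthenticated probe skips verification (an assumption made
		// for the sketch).
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered 200: "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond) // illustrative poll interval
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8441/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```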
	
	
	==> CRI-O <==
	Oct 25 09:42:50 functional-900552 crio[3502]: time="2025-10-25T09:42:50.946989093Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-6vndh Namespace:default ID:fb5bcc1ae0b17a1a0d95f8114be32013bc605c1d2b83a8000b4831a8b3b19f95 UID:56d9e6fa-51a9-4e33-9ff2-0e93e94ef697 NetNS:/var/run/netns/907c4eff-b5aa-437b-b27f-3390b524f4d7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079648}] Aliases:map[]}"
	Oct 25 09:42:50 functional-900552 crio[3502]: time="2025-10-25T09:42:50.947425742Z" level=info msg="Checking pod default_hello-node-75c85bcc94-6vndh for CNI network kindnet (type=ptp)"
	Oct 25 09:42:50 functional-900552 crio[3502]: time="2025-10-25T09:42:50.951312682Z" level=info msg="Ran pod sandbox fb5bcc1ae0b17a1a0d95f8114be32013bc605c1d2b83a8000b4831a8b3b19f95 with infra container: default/hello-node-75c85bcc94-6vndh/POD" id=9c256c82-6681-414a-b208-1b02cb85238c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 09:42:50 functional-900552 crio[3502]: time="2025-10-25T09:42:50.952533234Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=33876b7d-9b55-447d-8bff-b5a32d25cbcb name=/runtime.v1.ImageService/PullImage
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.005718067Z" level=info msg="Stopping pod sandbox: 40eaa3841203d57560e4b8f1e9c2380febcc3beed02587922733ea6b16e78775" id=ade6521a-bcf7-4059-805f-34775ee6996e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.005794679Z" level=info msg="Stopped pod sandbox (already stopped): 40eaa3841203d57560e4b8f1e9c2380febcc3beed02587922733ea6b16e78775" id=ade6521a-bcf7-4059-805f-34775ee6996e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.007269873Z" level=info msg="Removing pod sandbox: 40eaa3841203d57560e4b8f1e9c2380febcc3beed02587922733ea6b16e78775" id=010ae696-22e8-4015-bf67-ee374f93c434 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.012278114Z" level=info msg="Removed pod sandbox: 40eaa3841203d57560e4b8f1e9c2380febcc3beed02587922733ea6b16e78775" id=010ae696-22e8-4015-bf67-ee374f93c434 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.039066311Z" level=info msg="Stopping pod sandbox: 46949221f01dcd2fac9589e3ea251fccc1bf257268bfc0c6e6c51b9fd3d021a7" id=ed18234c-1495-4db5-ad6e-a2f5dc819e88 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.039143219Z" level=info msg="Stopped pod sandbox (already stopped): 46949221f01dcd2fac9589e3ea251fccc1bf257268bfc0c6e6c51b9fd3d021a7" id=ed18234c-1495-4db5-ad6e-a2f5dc819e88 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.04048086Z" level=info msg="Removing pod sandbox: 46949221f01dcd2fac9589e3ea251fccc1bf257268bfc0c6e6c51b9fd3d021a7" id=91e93e82-44a1-4eff-afdc-6eddf05397ef name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.044945201Z" level=info msg="Removed pod sandbox: 46949221f01dcd2fac9589e3ea251fccc1bf257268bfc0c6e6c51b9fd3d021a7" id=91e93e82-44a1-4eff-afdc-6eddf05397ef name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.086519686Z" level=info msg="Stopping pod sandbox: 78d84ca6c5537e6772d6ee853636e6808d4d07f1ad7d5fb51bf2a1fb710e4b3c" id=55e8c153-2812-47a8-b608-fdb11763d181 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.086590629Z" level=info msg="Stopped pod sandbox (already stopped): 78d84ca6c5537e6772d6ee853636e6808d4d07f1ad7d5fb51bf2a1fb710e4b3c" id=55e8c153-2812-47a8-b608-fdb11763d181 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.087200441Z" level=info msg="Removing pod sandbox: 78d84ca6c5537e6772d6ee853636e6808d4d07f1ad7d5fb51bf2a1fb710e4b3c" id=c50f236b-adaa-4614-bbc1-af34bb13367b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:00 functional-900552 crio[3502]: time="2025-10-25T09:43:00.102045995Z" level=info msg="Removed pod sandbox: 78d84ca6c5537e6772d6ee853636e6808d4d07f1ad7d5fb51bf2a1fb710e4b3c" id=c50f236b-adaa-4614-bbc1-af34bb13367b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 25 09:43:01 functional-900552 crio[3502]: time="2025-10-25T09:43:01.85310105Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5223fd7e-f265-4b27-8671-d747eb17d5de name=/runtime.v1.ImageService/PullImage
	Oct 25 09:43:10 functional-900552 crio[3502]: time="2025-10-25T09:43:10.853717489Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a829840d-d21e-41b1-a415-7b7b6862d4c3 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:43:29 functional-900552 crio[3502]: time="2025-10-25T09:43:29.854881943Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=18365ac3-c975-45b9-be7c-339a505cf204 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:44:01 functional-900552 crio[3502]: time="2025-10-25T09:44:01.85390953Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f3adb155-69da-4191-8637-2bcf9d534c71 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:44:10 functional-900552 crio[3502]: time="2025-10-25T09:44:10.853371594Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=efc63bc5-a4f4-4e46-af6a-958c8ec677fe name=/runtime.v1.ImageService/PullImage
	Oct 25 09:45:31 functional-900552 crio[3502]: time="2025-10-25T09:45:31.855338054Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=45fee36a-d421-4e2b-afe7-74976030966e name=/runtime.v1.ImageService/PullImage
	Oct 25 09:45:34 functional-900552 crio[3502]: time="2025-10-25T09:45:34.853554182Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=68268c9b-a2d7-4cfe-a526-5a0c527cff03 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:48:18 functional-900552 crio[3502]: time="2025-10-25T09:48:18.853140029Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=26803e4f-801e-4124-addf-2b450a005a57 name=/runtime.v1.ImageService/PullImage
	Oct 25 09:48:20 functional-900552 crio[3502]: time="2025-10-25T09:48:20.853266493Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7a4e2cfd-edfb-4f1a-99c1-02aceb4e6d0e name=/runtime.v1.ImageService/PullImage
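
The CRI-O section above shows the kubelet re-requesting the kicbase/echo-server:latest pull at roughly doubling intervals for two pods: seconds apart around 09:42-09:43, then minutes apart by 09:48, with no successful pull logged. That spacing is consistent with the kubelet's standard image-pull backoff, which doubles the retry delay after each failure up to a cap. A toy sketch of that schedule follows; the 10s base and 5m cap match the defaults I would expect, but treat them as assumptions rather than values confirmed by this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Kubelet-style image pull backoff: the delay doubles after each
	// failed pull attempt and is capped. The base/cap values are
	// assumptions matching the rough spacing of the retries in the
	// CRI-O log above.
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := base
	elapsed := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		elapsed += delay
		fmt.Printf("retry %d after %v (t+%v)\n", attempt, delay, elapsed)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```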
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	cfbcbc279ef1a       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f   9 minutes ago       Running             myfrontend                0                   5360039c32f6a       sp-pod                                      default
	b71cf9f7ba06b       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   339949859bd7a       nginx-svc                                   default
	056e109aa572e       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   a0f70d0f043a2       kindnet-jvghm                               kube-system
	334bb1b8bd4f7       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   230ffbc7a9182       coredns-66bc5c9577-hdntf                    kube-system
	e1b8b939afd71       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   fca050ee833ae       kube-proxy-w94xc                            kube-system
	c8dd98cb48df2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   abd20c866d155       storage-provisioner                         kube-system
	6bf262f92bc89       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   8a8cb009271f8       kube-apiserver-functional-900552            kube-system
	4ed26d35429c7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   4dddeda3e32c4       kube-controller-manager-functional-900552   kube-system
	2fb0b1df0c319       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   62a6539980598       kube-scheduler-functional-900552            kube-system
	6d15c7182b939       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   40e3f67129aeb       etcd-functional-900552                      kube-system
	7c7a6c3c8080b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   4dddeda3e32c4       kube-controller-manager-functional-900552   kube-system
	8d90be8326365       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   230ffbc7a9182       coredns-66bc5c9577-hdntf                    kube-system
	056fec35ce7b0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   fca050ee833ae       kube-proxy-w94xc                            kube-system
	7015b551ded76       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   40e3f67129aeb       etcd-functional-900552                      kube-system
	93d657fb4dc86       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   abd20c866d155       storage-provisioner                         kube-system
	d074e67e65088       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   a0f70d0f043a2       kindnet-jvghm                               kube-system
	6ab99e50dba12       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   62a6539980598       kube-scheduler-functional-900552            kube-system
	
	
	==> coredns [334bb1b8bd4f7c2c4b5f9a772e8c65b4c1e2727fa8834c7c5c401b9e44d6d662] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59900 - 34157 "HINFO IN 386072091109156214.7703777596592278958. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013138819s
	
	
	==> coredns [8d90be83263651f189726e0eef82fad5cfdca1e0c1d7f7ca63681ce011a47954] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60201 - 43124 "HINFO IN 5597758441980023524.4144210279349302231. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019930953s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-900552
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-900552
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=functional-900552
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T09_40_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 09:40:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-900552
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 09:52:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 09:52:07 +0000   Sat, 25 Oct 2025 09:40:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 09:52:07 +0000   Sat, 25 Oct 2025 09:40:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 09:52:07 +0000   Sat, 25 Oct 2025 09:40:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 09:52:07 +0000   Sat, 25 Oct 2025 09:41:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-900552
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fac8961e-709a-4aa2-ba02-0ed36d112c30
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6vndh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  default                     hello-node-connect-7d85dfc575-v99hn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m54s
	  kube-system                 coredns-66bc5c9577-hdntf                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-900552                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-jvghm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-900552             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-900552    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-w94xc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-900552             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-900552 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-900552 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-900552 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-900552 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-900552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-900552 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-900552 event: Registered Node functional-900552 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-900552 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-900552 event: Registered Node functional-900552 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-900552 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-900552 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-900552 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-900552 event: Registered Node functional-900552 in Controller
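
As a quick cross-check of the "Allocated resources" summary in the node description above: the per-pod CPU requests listed in the non-terminated pods table sum to the reported 850m, which against this 2-CPU (2000m) node is 42.5%, shown by kubectl truncated to 42%. A small worked verification:

```go
package main

import "fmt"

func main() {
	// CPU requests (milli-CPU) from the non-terminated pods table above:
	// coredns 100m, etcd 100m, kindnet 100m, kube-apiserver 250m,
	// kube-controller-manager 200m, kube-scheduler 100m.
	requests := []int{100, 100, 100, 250, 200, 100}
	total := 0
	for _, r := range requests {
		total += r
	}
	fmt.Printf("%dm of 2000m allocatable = %.1f%%\n", total, float64(total)/2000*100)
	// Output: 850m of 2000m allocatable = 42.5%
}
```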
	
	
	==> dmesg <==
	[Oct25 08:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015587] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.503041] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.036759] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.769713] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.474162] kauditd_printk_skb: 36 callbacks suppressed
	[Oct25 08:29] hrtimer: interrupt took 30248914 ns
	[Oct25 09:08] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct25 09:31] kauditd_printk_skb: 8 callbacks suppressed
	[Oct25 09:33] overlayfs: idmapped layers are currently not supported
	[  +0.069522] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct25 09:39] overlayfs: idmapped layers are currently not supported
	[Oct25 09:40] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d15c7182b93997e6ac865fe6f868b46727c67ab298eab053c4a95b4d094315a] <==
	{"level":"warn","ts":"2025-10-25T09:42:02.989949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.018087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.047331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.084715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.108199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.134723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.192011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.224541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.246471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.269798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.297277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.327400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.349350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.379408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.405588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.439018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.459396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.480645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.512118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.528100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.545337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:42:03.651853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44008","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:52:01.787505Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1126}
	{"level":"info","ts":"2025-10-25T09:52:01.811421Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1126,"took":"23.46591ms","hash":537146278,"current-db-size-bytes":3268608,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1458176,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-25T09:52:01.811477Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":537146278,"revision":1126,"compact-revision":-1}
	
	
	==> etcd [7015b551ded76028c75ed198921917e4a9b787f5aeb48fc01cd64fed616b209e] <==
	{"level":"warn","ts":"2025-10-25T09:41:16.095288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.114000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.132266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.172336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.212023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.220636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T09:41:16.264405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T09:41:41.787653Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T09:41:41.787704Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-900552","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-25T09:41:41.787815Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:41:41.939590Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T09:41:41.939692Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:41:41.939716Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-25T09:41:41.939830Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-25T09:41:41.939853Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-25T09:41:41.939884Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:41:41.939955Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:41:41.939991Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T09:41:41.940078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T09:41:41.940115Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T09:41:41.940133Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:41:41.943850Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-25T09:41:41.943924Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T09:41:41.943956Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-25T09:41:41.943963Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-900552","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 09:52:36 up  1:35,  0 user,  load average: 0.34, 0.53, 1.56
	Linux functional-900552 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [056e109aa572e311f6a7d759646374e293f0c5cceff3dd7166549a279638799e] <==
	I1025 09:50:35.589479       1 main.go:301] handling current node
	I1025 09:50:45.584203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:45.584251       1 main.go:301] handling current node
	I1025 09:50:55.584333       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:50:55.584444       1 main.go:301] handling current node
	I1025 09:51:05.591255       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:05.591363       1 main.go:301] handling current node
	I1025 09:51:15.588070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:15.588107       1 main.go:301] handling current node
	I1025 09:51:25.584768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:25.584802       1 main.go:301] handling current node
	I1025 09:51:35.584270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:35.584305       1 main.go:301] handling current node
	I1025 09:51:45.592198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:45.592234       1 main.go:301] handling current node
	I1025 09:51:55.587048       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:51:55.587088       1 main.go:301] handling current node
	I1025 09:52:05.590768       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:05.590895       1 main.go:301] handling current node
	I1025 09:52:15.587851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:15.587889       1 main.go:301] handling current node
	I1025 09:52:25.591246       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:25.591359       1 main.go:301] handling current node
	I1025 09:52:35.591905       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:52:35.591937       1 main.go:301] handling current node
	
	
	==> kindnet [d074e67e65088cb0fcfd238f6c75480732f62832c11a38c11a408040588a9651] <==
	I1025 09:41:12.990171       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 09:41:12.990401       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1025 09:41:12.990582       1 main.go:148] setting mtu 1500 for CNI 
	I1025 09:41:12.990608       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 09:41:12.990619       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T09:41:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 09:41:13.357633       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 09:41:13.357732       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 09:41:13.357765       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 09:41:13.358138       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 09:41:17.661419       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 09:41:17.661526       1 metrics.go:72] Registering metrics
	I1025 09:41:17.661634       1 controller.go:711] "Syncing nftables rules"
	I1025 09:41:23.328708       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:41:23.328780       1 main.go:301] handling current node
	I1025 09:41:33.328589       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1025 09:41:33.328625       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6bf262f92bc89c7fbe22274c076f1a63d7d9d9633367a5090f92ced71807f9b8] <==
	I1025 09:42:04.617550       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 09:42:04.651524       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 09:42:04.651563       1 policy_source.go:240] refreshing policies
	I1025 09:42:04.660269       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 09:42:04.661396       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 09:42:04.661448       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 09:42:04.684138       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 09:42:04.688715       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 09:42:04.906913       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 09:42:05.306132       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 09:42:06.367452       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 09:42:06.512784       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 09:42:06.596986       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 09:42:06.608568       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 09:42:17.719795       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.153.245"}
	I1025 09:42:17.750367       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 09:42:17.751582       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1025 09:42:22.095517       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1025 09:42:24.320179       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.123.27"}
	I1025 09:42:33.870337       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 09:42:34.052024       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.215.61"}
	E1025 09:42:41.267907       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55972: use of closed network connection
	E1025 09:42:50.515747       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57650: use of closed network connection
	I1025 09:42:50.706742       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.191.116"}
	I1025 09:52:04.581069       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [4ed26d35429c7cb13d6f8dbf88e38d2654db9f0f484e320910d69678bd76be4d] <==
	I1025 09:42:07.870842       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 09:42:07.871291       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-900552"
	I1025 09:42:07.871377       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 09:42:07.871456       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 09:42:07.875190       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 09:42:07.875266       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 09:42:07.875280       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 09:42:07.875291       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:42:07.875799       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:42:07.876692       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:42:07.879209       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:42:07.880546       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:42:07.882920       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 09:42:07.886647       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 09:42:07.903560       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:42:07.903653       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:42:07.903684       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:42:07.907945       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:42:07.908840       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 09:42:07.908900       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 09:42:07.908927       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 09:42:07.908940       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:42:07.908946       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:42:07.913210       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:42:07.918875       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [7c7a6c3c8080b764fccd74a8ee58fab027d2c3513906fd18ab3638d14c9bd6e9] <==
	I1025 09:41:20.931466       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 09:41:20.931473       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 09:41:20.931199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:41:20.937415       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 09:41:20.938691       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 09:41:20.939772       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 09:41:20.940960       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 09:41:20.944262       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 09:41:20.946558       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 09:41:20.947684       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 09:41:20.950729       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 09:41:20.963086       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 09:41:20.969238       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 09:41:20.972450       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 09:41:20.972525       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:41:20.972551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 09:41:20.972560       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 09:41:20.972635       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 09:41:20.972932       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 09:41:20.973159       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 09:41:20.976476       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 09:41:20.985551       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 09:41:20.992913       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 09:41:20.997292       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 09:41:21.000762       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	
	
	==> kube-proxy [056fec35ce7b095d969685390970a2d38f4e9d2f847d4af1cd33e0bfb07f8dab] <==
	I1025 09:41:16.200254       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:41:17.105359       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:41:17.622398       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:41:17.632404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:41:17.632502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:41:18.089557       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:41:18.089730       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:41:18.095004       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:41:18.095687       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:41:18.095763       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:41:18.101937       1 config.go:200] "Starting service config controller"
	I1025 09:41:18.102058       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:41:18.102156       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:41:18.102218       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:41:18.102259       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:41:18.102301       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:41:18.102964       1 config.go:309] "Starting node config controller"
	I1025 09:41:18.103015       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:41:18.103046       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:41:18.211064       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:41:18.211552       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 09:41:18.324068       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e1b8b939afd71ac3dd597849a6cfda7da8444886bd1e40b57e9ac6c66506f5b0] <==
	I1025 09:42:05.553280       1 server_linux.go:53] "Using iptables proxy"
	I1025 09:42:05.824950       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 09:42:05.927252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 09:42:05.927287       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1025 09:42:05.927361       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 09:42:05.953991       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 09:42:05.954121       1 server_linux.go:132] "Using iptables Proxier"
	I1025 09:42:05.960466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 09:42:05.960808       1 server.go:527] "Version info" version="v1.34.1"
	I1025 09:42:05.960986       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:42:05.962257       1 config.go:200] "Starting service config controller"
	I1025 09:42:05.962434       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 09:42:05.962484       1 config.go:106] "Starting endpoint slice config controller"
	I1025 09:42:05.962513       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 09:42:05.962547       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 09:42:05.962575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 09:42:05.971793       1 config.go:309] "Starting node config controller"
	I1025 09:42:05.972903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 09:42:05.972982       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 09:42:06.073179       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 09:42:06.073361       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 09:42:06.073395       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2fb0b1df0c319574b78dc6fd0310fb73c85652800ac1a2ee9ada1d86ec11c38c] <==
	I1025 09:42:03.796332       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:42:05.809156       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:42:05.810483       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:42:05.815937       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:42:05.816041       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 09:42:05.816068       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 09:42:05.816095       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:42:05.818315       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:42:05.818347       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:42:05.818366       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:42:05.818372       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:42:05.916600       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:42:05.918988       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:42:05.919057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [6ab99e50dba1296d1c23d587abfccfc5f4f4d929aa66db3ae9c19c380ec851a2] <==
	I1025 09:41:17.893355       1 serving.go:386] Generated self-signed cert in-memory
	I1025 09:41:19.486201       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 09:41:19.486331       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 09:41:19.492466       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 09:41:19.492572       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 09:41:19.492602       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 09:41:19.492639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 09:41:19.513873       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:41:19.513901       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:41:19.513929       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:41:19.513935       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:41:19.593719       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 09:41:19.614708       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:41:19.614783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:41:41.783377       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 09:41:41.783409       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 09:41:41.783434       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 09:41:41.783486       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 09:41:41.783508       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 09:41:41.783525       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1025 09:41:41.783830       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 09:41:41.783855       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 09:49:55 functional-900552 kubelet[3820]: E1025 09:49:55.853343    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:07 functional-900552 kubelet[3820]: E1025 09:50:07.854754    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:08 functional-900552 kubelet[3820]: E1025 09:50:08.853006    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:50:18 functional-900552 kubelet[3820]: E1025 09:50:18.852814    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:22 functional-900552 kubelet[3820]: E1025 09:50:22.852720    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:50:30 functional-900552 kubelet[3820]: E1025 09:50:30.853591    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:33 functional-900552 kubelet[3820]: E1025 09:50:33.854760    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:50:41 functional-900552 kubelet[3820]: E1025 09:50:41.859215    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:46 functional-900552 kubelet[3820]: E1025 09:50:46.853395    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:50:54 functional-900552 kubelet[3820]: E1025 09:50:54.852896    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:50:59 functional-900552 kubelet[3820]: E1025 09:50:59.853609    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:51:09 functional-900552 kubelet[3820]: E1025 09:51:09.854624    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:51:13 functional-900552 kubelet[3820]: E1025 09:51:13.853551    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:51:23 functional-900552 kubelet[3820]: E1025 09:51:23.853575    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:51:27 functional-900552 kubelet[3820]: E1025 09:51:27.853061    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:51:36 functional-900552 kubelet[3820]: E1025 09:51:36.853186    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:51:39 functional-900552 kubelet[3820]: E1025 09:51:39.853996    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:51:49 functional-900552 kubelet[3820]: E1025 09:51:49.854652    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:51:50 functional-900552 kubelet[3820]: E1025 09:51:50.853305    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:52:04 functional-900552 kubelet[3820]: E1025 09:52:04.852809    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:52:04 functional-900552 kubelet[3820]: E1025 09:52:04.853424    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:52:16 functional-900552 kubelet[3820]: E1025 09:52:16.852960    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	Oct 25 09:52:17 functional-900552 kubelet[3820]: E1025 09:52:17.853333    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:52:28 functional-900552 kubelet[3820]: E1025 09:52:28.853440    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6vndh" podUID="56d9e6fa-51a9-4e33-9ff2-0e93e94ef697"
	Oct 25 09:52:31 functional-900552 kubelet[3820]: E1025 09:52:31.853263    3820 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-v99hn" podUID="a515350f-e8e3-4b6c-b338-b6849cb96b9e"
	
	
	==> storage-provisioner [93d657fb4dc86f35464f12adb8ac86693e9e6e8edb1bbc68bf6ada461b17e83a] <==
	I1025 09:41:13.104467       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 09:41:17.633424       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 09:41:17.633495       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 09:41:17.651337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:21.124131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:25.384199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:28.982777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:32.036491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:35.058830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:35.064695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:41:35.064867       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 09:41:35.065045       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-900552_54775d08-3c91-4609-859b-64a0d9317d69!
	I1025 09:41:35.066080       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d8a483fb-f552-4b77-be2d-138623eccfcc", APIVersion:"v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-900552_54775d08-3c91-4609-859b-64a0d9317d69 became leader
	W1025 09:41:35.071098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:35.121354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 09:41:35.166194       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-900552_54775d08-3c91-4609-859b-64a0d9317d69!
	W1025 09:41:37.125014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:37.132713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:39.136061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:39.143121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:41.146449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:41:41.152069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c8dd98cb48df2891f229b3b05ed156386c994a00e9f3efa26e22af3581ff2e1c] <==
	W1025 09:52:11.489179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:13.492118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:13.498928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:15.501910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:15.506629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:17.509484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:17.514832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:19.517759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:19.522558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:21.526439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:21.533260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:23.536651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:23.541447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:25.544425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:25.548876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:27.552213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:27.556603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:29.559713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:29.564140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:31.567298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:31.571750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:33.574477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:33.579371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:35.582531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 09:52:35.588287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-900552 -n functional-900552
helpers_test.go:269: (dbg) Run:  kubectl --context functional-900552 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-6vndh hello-node-connect-7d85dfc575-v99hn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-900552 describe pod hello-node-75c85bcc94-6vndh hello-node-connect-7d85dfc575-v99hn
helpers_test.go:290: (dbg) kubectl --context functional-900552 describe pod hello-node-75c85bcc94-6vndh hello-node-connect-7d85dfc575-v99hn:

-- stdout --
	Name:             hello-node-75c85bcc94-6vndh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900552/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:42:50 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sg9w9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-sg9w9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6vndh to functional-900552
	  Normal   Pulling    7m3s (x5 over 9m47s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m3s (x5 over 9m47s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m3s (x5 over 9m47s)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m44s (x20 over 9m46s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m31s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-v99hn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-900552/192.168.49.2
	Start Time:       Sat, 25 Oct 2025 09:42:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2tlkl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2tlkl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-v99hn to functional-900552
	  Normal   Pulling    7m6s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m6s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m58s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m58s (x21 over 10m)  kubelet            Error: ImagePullBackOff
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.54s)
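Each hello-node failure above reduces to the same kubelet event: CRI-O is running with short-name mode enforcing, so the unqualified reference kicbase/echo-server:latest matches more than one configured unqualified-search registry and the pull is refused. Fully qualifying the reference bypasses short-name resolution altogether. A minimal sketch of that fix, assuming the image is published under docker.io (the registry prefix is an assumption, and this step is not part of the test suite):

// Hypothetical repair step: point the existing deployment at a fully
// qualified image reference so CRI-O never consults its short-name rules.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "echo-server" is the container name from the pod spec above;
	// docker.io/ is an assumed registry prefix for kicbase/echo-server.
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"set", "image", "deployment/hello-node-connect",
		"echo-server=docker.io/kicbase/echo-server:latest").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("set image failed:", err)
	}
}

The same change would apply to the hello-node deployment created in ServiceCmd/DeployApp below, which fails on the identical pull error.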
TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-900552 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-900552 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6vndh" [56d9e6fa-51a9-4e33-9ff2-0e93e94ef697] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1025 09:43:12.541818  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:45:28.677895  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:45:56.383809  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:50:28.677745  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-900552 -n functional-900552
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-25 09:52:51.225956162 +0000 UTC m=+1230.188139647
functional_test.go:1460: (dbg) Run:  kubectl --context functional-900552 describe po hello-node-75c85bcc94-6vndh -n default
functional_test.go:1460: (dbg) kubectl --context functional-900552 describe po hello-node-75c85bcc94-6vndh -n default:
Name:             hello-node-75c85bcc94-6vndh
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-900552/192.168.49.2
Start Time:       Sat, 25 Oct 2025 09:42:50 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-sg9w9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-sg9w9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6vndh to functional-900552
  Normal   Pulling    7m17s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-900552 logs hello-node-75c85bcc94-6vndh -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-900552 logs hello-node-75c85bcc94-6vndh -n default: exit status 1 (118.963354ms)
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-6vndh" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1460: kubectl --context functional-900552 logs hello-node-75c85bcc94-6vndh -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.95s)
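The harness itself behaved as designed here: it polls pods labelled app=hello-node for up to 10m0s and only dumps describe/logs after the deadline. A stand-alone sketch of that readiness poll, assuming kubectl is on PATH (the 5s interval and the jsonpath query are illustrative, not the test's exact internals):

// Sketch of the readiness poll the test performs: list pods by label and
// wait until every reported phase is Running, or give up at the deadline.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(10 * time.Minute)
	for time.Now().Before(deadline) {
		out, _ := exec.Command("kubectl", "--context", "functional-900552",
			"get", "pods", "-l", "app=hello-node", "-n", "default",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		phases := strings.Fields(string(out))
		ready := len(phases) > 0
		for _, p := range phases {
			if p != "Running" {
				ready = false
			}
		}
		if ready {
			fmt.Println("pods ready:", phases)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("failed waiting for hello-node pod: context deadline exceeded")
}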
TestFunctional/parallel/ServiceCmd/HTTPS (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 service --namespace=default --https --url hello-node: exit status 115 (495.030578ms)
-- stdout --
	https://192.168.49.2:32566
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-900552 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.50s)
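Note that the NodePort URL is printed before minikube aborts: the port (32566) resolves fine, but the SVC_UNREACHABLE check fails because the service has no running pod behind it, which also explains the Format and URL subtests below. A sketch of the same determination via the service's endpoints object, assuming the functional-900552 context (whether minikube inspects exactly this object is not shown in the log):

// Sketch: reproduce the SVC_UNREACHABLE check by asking whether the
// Service has any ready endpoint addresses behind it.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"get", "endpoints", "hello-node", "-n", "default",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no running pod for service hello-node found")
		return
	}
	fmt.Println("ready endpoints:", strings.TrimSpace(string(out)))
}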
TestFunctional/parallel/ServiceCmd/Format (0.48s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 service hello-node --url --format={{.IP}}: exit status 115 (478.017362ms)
-- stdout --
	192.168.49.2
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-900552 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.48s)
TestFunctional/parallel/ServiceCmd/URL (0.57s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 service hello-node --url: exit status 115 (567.021374ms)
-- stdout --
	http://192.168.49.2:32566
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-900552 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32566
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.57s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image load --daemon kicbase/echo-server:functional-900552 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 image load --daemon kicbase/echo-server:functional-900552 --alsologtostderr: (1.079367748s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-900552" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)
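image load --daemon copies an image out of the host's Docker daemon into the cluster runtime's store, and the assertion at functional_test.go:461 then looks for the tag in image ls output. A sketch of that load-and-verify loop, using the same binary path the test uses (the plain substring match is a simplification of the test's actual comparison):

// Sketch of the verification step: load the tag from the host daemon, then
// confirm it shows up in the cluster runtime's image list.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tag := "kicbase/echo-server:functional-900552"
	exec.Command("out/minikube-linux-arm64", "-p", "functional-900552",
		"image", "load", "--daemon", tag).Run()
	out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-900552",
		"image", "ls").Output()
	if strings.Contains(string(out), tag) {
		fmt.Println("image present in cluster runtime")
	} else {
		fmt.Println("image missing after load") // the state this test failed in
	}
}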
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image load --daemon kicbase/echo-server:functional-900552 --alsologtostderr
2025/10/25 09:53:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 image load --daemon kicbase/echo-server:functional-900552 --alsologtostderr: (1.193119332s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-900552" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.49s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-900552
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image load --daemon kicbase/echo-server:functional-900552 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-900552" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image save kicbase/echo-server:functional-900552 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>
** stderr ** 
	I1025 09:53:05.194674  322264 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:53:05.194933  322264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:05.194949  322264 out.go:374] Setting ErrFile to fd 2...
	I1025 09:53:05.194955  322264 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:53:05.195330  322264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:53:05.196016  322264 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:05.196178  322264 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:53:05.196684  322264 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
	I1025 09:53:05.218759  322264 ssh_runner.go:195] Run: systemctl --version
	I1025 09:53:05.218824  322264 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
	I1025 09:53:05.238005  322264 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
	I1025 09:53:05.354214  322264 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1025 09:53:05.354280  322264 cache_images.go:254] Failed to load cached images for "functional-900552": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1025 09:53:05.354315  322264 cache_images.go:266] failed pushing to: functional-900552
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
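This failure is a cascade rather than a new bug: ImageSaveToFile above never produced echo-server-save.tar, so the stat in cache_images.go fails before any load is attempted. A sketch of the round trip with an existence check in between, so the second error cannot mask the first (paths and profile name taken from the runs above):

// Sketch: save-then-load round trip that fails fast if the save step
// silently produced no archive, as happened in ImageSaveToFile above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar"
	exec.Command("out/minikube-linux-arm64", "-p", "functional-900552",
		"image", "save", "kicbase/echo-server:functional-900552", tar).Run()
	if _, err := os.Stat(tar); err != nil {
		fmt.Println("save produced no archive, skipping load:", err)
		return
	}
	if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-900552",
		"image", "load", tar).Run(); err != nil {
		fmt.Println("load failed:", err)
	}
}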
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-900552
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image save --daemon kicbase/echo-server:functional-900552 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-900552
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-900552: exit status 1 (21.800479ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-900552
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-900552
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
TestJSONOutput/pause/Command (1.82s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-593183 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-593183 --output=json --user=testUser: exit status 80 (1.821317287s)
-- stdout --
	{"specversion":"1.0","id":"6a4ee940-71c8-46e2-93b6-084c3b0fcfac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-593183 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a84b8df3-2256-4900-ad65-59346e15545c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:05:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"02f7c277-25b0-4dcf-9db2-0542b8ab17d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-593183 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.82s)
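With --output=json, every stdout line is a CloudEvents envelope, so the real failure (sudo runc list -f json cannot open /run/runc inside the node container) is buried in the data.message of the io.k8s.sigs.minikube.error event. A sketch of a decoder for that stream, with struct fields matching the events shown above:

// Sketch: decode minikube's --output=json CloudEvents stream from stdin and
// surface the message carried by any io.k8s.sigs.minikube.error event.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // error events can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		if strings.HasSuffix(ev.Type, ".error") {
			fmt.Println("error event:", ev.Data["message"])
		}
	}
}

Piping the pause command's stdout through it would print the GUEST_PAUSE message directly instead of leaving it embedded in the envelope.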
TestJSONOutput/unpause/Command (2.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-593183 --output=json --user=testUser
E1025 10:05:28.677357  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-593183 --output=json --user=testUser: exit status 80 (2.052428645s)
-- stdout --
	{"specversion":"1.0","id":"0349cfd6-59ee-4424-994a-8e0c85d312f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-593183 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"5fe950f5-7436-4dac-9c40-ebb5673b16a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-25T10:05:30Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e6389480-c7a9-4aaf-888c-7f1d0207977e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-593183 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.05s)
TestScheduledStopUnix (40.02s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-805745 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-805745 --memory=3072 --driver=docker  --container-runtime=crio: (35.028153647s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-805745 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-805745 -n scheduled-stop-805745
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-805745 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 421835 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-10-25 10:20:24.451916623 +0000 UTC m=+2883.414100108
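The assertion at scheduled_stop_test.go:98 is that issuing a new stop --schedule kills the previously spawned scheduler process; here PID 421835 survived the reschedule. The usual way to make that liveness check on Linux is signal 0, which probes a PID without delivering anything; a sketch follows (whether the harness uses exactly this mechanism is an assumption):

// Sketch: signal-0 liveness probe for deciding whether an old
// scheduled-stop process is still alive after a reschedule (Linux).
package main

import (
	"fmt"
	"syscall"
)

func alive(pid int) bool {
	// Signal 0 performs existence/permission checks without delivering a
	// signal; ESRCH means the process is gone, EPERM means it exists but
	// belongs to someone else.
	err := syscall.Kill(pid, syscall.Signal(0))
	return err == nil || err == syscall.EPERM
}

func main() {
	if alive(421835) { // PID taken from this test run
		fmt.Println("process 421835 running but should have been killed on reschedule of stop")
	} else {
		fmt.Println("old scheduled-stop process exited as expected")
	}
}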
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-805745
helpers_test.go:243: (dbg) docker inspect scheduled-stop-805745:
-- stdout --
	[
	    {
	        "Id": "e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a",
	        "Created": "2025-10-25T10:19:54.312590021Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 420034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:19:54.377705648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a/hostname",
	        "HostsPath": "/var/lib/docker/containers/e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a/hosts",
	        "LogPath": "/var/lib/docker/containers/e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a/e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a-json.log",
	        "Name": "/scheduled-stop-805745",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "scheduled-stop-805745:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-805745",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e40b8a7053ed238878c6ecc4d8d89db0a1a4518051b2318ca3ad4d43eb52e81a",
	                "LowerDir": "/var/lib/docker/overlay2/efe56085b72afa8672ba62ef60e717891b33a0492b640a4a26be594d90d59d7b-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/efe56085b72afa8672ba62ef60e717891b33a0492b640a4a26be594d90d59d7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/efe56085b72afa8672ba62ef60e717891b33a0492b640a4a26be594d90d59d7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/efe56085b72afa8672ba62ef60e717891b33a0492b640a4a26be594d90d59d7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-805745",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-805745/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-805745",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-805745",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-805745",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c9f8c05eb0e4db6a827fe7bb034f2e831d806947028b102b7aff11871a76da24",
	            "SandboxKey": "/var/run/docker/netns/c9f8c05eb0e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33338"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33341"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33339"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33340"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-805745": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:71:37:22:0c:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d91a6458f23b78ef491c49ed83c6a613f38597e6d5a9291c1c0176bde5475572",
	                    "EndpointID": "fce6f97da7221b0a57cba9fa81e23761e4a332b24ee024cfb01e26eb7dfd6993",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-805745",
	                        "e40b8a7053ed"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-805745 -n scheduled-stop-805745
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-805745 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p scheduled-stop-805745 logs -n 25: (1.113431886s)
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-919215                                                                                                                                       │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:14 UTC │ 25 Oct 25 10:14 UTC │
	│ start   │ -p multinode-919215 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:14 UTC │ 25 Oct 25 10:15 UTC │
	│ node    │ list -p multinode-919215                                                                                                                                  │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:15 UTC │                     │
	│ node    │ multinode-919215 node delete m03                                                                                                                          │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:15 UTC │ 25 Oct 25 10:15 UTC │
	│ stop    │ multinode-919215 stop                                                                                                                                     │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:15 UTC │ 25 Oct 25 10:16 UTC │
	│ start   │ -p multinode-919215 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:16 UTC │ 25 Oct 25 10:17 UTC │
	│ node    │ list -p multinode-919215                                                                                                                                  │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │                     │
	│ start   │ -p multinode-919215-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-919215-m02  │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │                     │
	│ start   │ -p multinode-919215-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-919215-m03  │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │ 25 Oct 25 10:17 UTC │
	│ node    │ add -p multinode-919215                                                                                                                                   │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │                     │
	│ delete  │ -p multinode-919215-m03                                                                                                                                   │ multinode-919215-m03  │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │ 25 Oct 25 10:17 UTC │
	│ delete  │ -p multinode-919215                                                                                                                                       │ multinode-919215      │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │ 25 Oct 25 10:17 UTC │
	│ start   │ -p test-preload-837636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:17 UTC │ 25 Oct 25 10:18 UTC │
	│ image   │ test-preload-837636 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:18 UTC │ 25 Oct 25 10:18 UTC │
	│ stop    │ -p test-preload-837636                                                                                                                                    │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:18 UTC │ 25 Oct 25 10:18 UTC │
	│ start   │ -p test-preload-837636 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:18 UTC │ 25 Oct 25 10:19 UTC │
	│ image   │ test-preload-837636 image list                                                                                                                            │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ delete  │ -p test-preload-837636                                                                                                                                    │ test-preload-837636   │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:19 UTC │
	│ start   │ -p scheduled-stop-805745 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:19 UTC │ 25 Oct 25 10:20 UTC │
	│ stop    │ -p scheduled-stop-805745 --schedule 5m                                                                                                                    │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p scheduled-stop-805745 --schedule 5m                                                                                                                    │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p scheduled-stop-805745 --schedule 5m                                                                                                                    │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p scheduled-stop-805745 --schedule 15s                                                                                                                   │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p scheduled-stop-805745 --schedule 15s                                                                                                                   │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ stop    │ -p scheduled-stop-805745 --schedule 15s                                                                                                                   │ scheduled-stop-805745 │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:19:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:19:48.916655  419646 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:19:48.916774  419646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:19:48.916778  419646 out.go:374] Setting ErrFile to fd 2...
	I1025 10:19:48.916782  419646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:19:48.917036  419646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:19:48.917428  419646 out.go:368] Setting JSON to false
	I1025 10:19:48.918242  419646 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7339,"bootTime":1761380250,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:19:48.918298  419646 start.go:141] virtualization:  
	I1025 10:19:48.922515  419646 out.go:179] * [scheduled-stop-805745] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:19:48.927323  419646 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:19:48.927383  419646 notify.go:220] Checking for updates...
	I1025 10:19:48.934973  419646 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:19:48.938350  419646 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:19:48.941473  419646 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:19:48.944683  419646 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:19:48.947860  419646 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:19:48.951285  419646 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:19:48.979932  419646 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:19:48.980056  419646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:19:49.046731  419646 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:19:49.036798218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:19:49.046874  419646 docker.go:318] overlay module found
	I1025 10:19:49.050065  419646 out.go:179] * Using the docker driver based on user configuration
	I1025 10:19:49.053093  419646 start.go:305] selected driver: docker
	I1025 10:19:49.053101  419646 start.go:925] validating driver "docker" against <nil>
	I1025 10:19:49.053114  419646 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:19:49.053860  419646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:19:49.109668  419646 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-25 10:19:49.100946218 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:19:49.109809  419646 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:19:49.110014  419646 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 10:19:49.113073  419646 out.go:179] * Using Docker driver with root privileges
	I1025 10:19:49.115962  419646 cni.go:84] Creating CNI manager for ""
	I1025 10:19:49.116022  419646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:19:49.116030  419646 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:19:49.116112  419646 start.go:349] cluster config:
	{Name:scheduled-stop-805745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-805745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
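
The cluster config above is minikube's internal representation of the profile being created. As a rough cross-check, a start invocation along the following lines should yield an equivalent profile (a sketch; the profile name, memory, and CPU values are taken from this run, and the flag spellings are the standard minikube ones):

    out/minikube-linux-arm64 start -p scheduled-stop-805745 \
      --driver=docker --container-runtime=crio \
      --memory=3072 --cpus=2
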
	I1025 10:19:49.121181  419646 out.go:179] * Starting "scheduled-stop-805745" primary control-plane node in "scheduled-stop-805745" cluster
	I1025 10:19:49.124048  419646 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:19:49.127015  419646 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:19:49.129947  419646 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:19:49.129996  419646 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:19:49.130005  419646 cache.go:58] Caching tarball of preloaded images
	I1025 10:19:49.130030  419646 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:19:49.130096  419646 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:19:49.130105  419646 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:19:49.130434  419646 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/config.json ...
	I1025 10:19:49.130452  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/config.json: {Name:mkc2bd4c77b9614879b42f01fe895b4a2eecee16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:19:49.148424  419646 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:19:49.148436  419646 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:19:49.148455  419646 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:19:49.148477  419646 start.go:360] acquireMachinesLock for scheduled-stop-805745: {Name:mkb83234f51a820afcdb4005132563443ae29e8c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:19:49.148595  419646 start.go:364] duration metric: took 103.066µs to acquireMachinesLock for "scheduled-stop-805745"
	I1025 10:19:49.148620  419646 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-805745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-805745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:19:49.148687  419646 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:19:49.152077  419646 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:19:49.152306  419646 start.go:159] libmachine.API.Create for "scheduled-stop-805745" (driver="docker")
	I1025 10:19:49.152352  419646 client.go:168] LocalClient.Create starting
	I1025 10:19:49.152443  419646 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:19:49.152484  419646 main.go:141] libmachine: Decoding PEM data...
	I1025 10:19:49.152500  419646 main.go:141] libmachine: Parsing certificate...
	I1025 10:19:49.152551  419646 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:19:49.152566  419646 main.go:141] libmachine: Decoding PEM data...
	I1025 10:19:49.152575  419646 main.go:141] libmachine: Parsing certificate...
	I1025 10:19:49.152941  419646 cli_runner.go:164] Run: docker network inspect scheduled-stop-805745 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:19:49.169014  419646 cli_runner.go:211] docker network inspect scheduled-stop-805745 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:19:49.169081  419646 network_create.go:284] running [docker network inspect scheduled-stop-805745] to gather additional debugging logs...
	I1025 10:19:49.169097  419646 cli_runner.go:164] Run: docker network inspect scheduled-stop-805745
	W1025 10:19:49.183673  419646 cli_runner.go:211] docker network inspect scheduled-stop-805745 returned with exit code 1
	I1025 10:19:49.183709  419646 network_create.go:287] error running [docker network inspect scheduled-stop-805745]: docker network inspect scheduled-stop-805745: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-805745 not found
	I1025 10:19:49.183729  419646 network_create.go:289] output of [docker network inspect scheduled-stop-805745]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-805745 not found
	
	** /stderr **
	I1025 10:19:49.183828  419646 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:19:49.199807  419646 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:19:49.200030  419646 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:19:49.200313  419646 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:19:49.200620  419646 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ee480}
	I1025 10:19:49.200635  419646 network_create.go:124] attempt to create docker network scheduled-stop-805745 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:19:49.200697  419646 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-805745 scheduled-stop-805745
	I1025 10:19:49.256639  419646 network_create.go:108] docker network scheduled-stop-805745 192.168.76.0/24 created
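
Minikube probed 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24, found each taken by an existing bridge network, and settled on 192.168.76.0/24. The same survey can be done by hand with standard docker CLI calls (a sketch; network names and subnets will differ per host):

    # List every bridge network alongside its IPAM subnet(s)
    for n in $(docker network ls --filter driver=bridge -q); do
      docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"
    done
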
	I1025 10:19:49.256663  419646 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-805745" container
	I1025 10:19:49.256733  419646 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:19:49.272232  419646 cli_runner.go:164] Run: docker volume create scheduled-stop-805745 --label name.minikube.sigs.k8s.io=scheduled-stop-805745 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:19:49.290016  419646 oci.go:103] Successfully created a docker volume scheduled-stop-805745
	I1025 10:19:49.290108  419646 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-805745-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-805745 --entrypoint /usr/bin/test -v scheduled-stop-805745:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:19:49.797760  419646 oci.go:107] Successfully prepared a docker volume scheduled-stop-805745
	I1025 10:19:49.797806  419646 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:19:49.797826  419646 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:19:49.797900  419646 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-805745:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:19:54.240051  419646 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-805745:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.442115211s)
	I1025 10:19:54.240071  419646 kic.go:203] duration metric: took 4.442242409s to extract preloaded images to volume ...
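
The preload is unpacked by a throwaway "sidecar" container: the lz4 tarball is bind-mounted read-only, the named volume is mounted at the extraction target, and tar runs as the container entrypoint. The generic pattern, distilled from the command above (a sketch; demo-vol, the tarball path, and the image tag are placeholders):

    docker volume create demo-vol
    docker run --rm \
      -v "$PWD/preloaded.tar.lz4:/preloaded.tar:ro" \
      -v demo-vol:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:<tag> \
      -I lz4 -xf /preloaded.tar -C /extractDir
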
	W1025 10:19:54.240231  419646 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:19:54.240335  419646 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:19:54.297032  419646 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-805745 --name scheduled-stop-805745 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-805745 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-805745 --network scheduled-stop-805745 --ip 192.168.76.2 --volume scheduled-stop-805745:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
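
Note that 8443, 22, 2376, 5000, and 32443 are all published to ephemeral loopback ports (--publish=127.0.0.1::PORT), which is why the later steps have to look the SSH port up by container inspection. A quicker equivalent of that lookup (a sketch; prints the host side of the 22/tcp mapping):

    docker port scheduled-stop-805745 22/tcp
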
	I1025 10:19:54.598412  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Running}}
	I1025 10:19:54.620858  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:19:54.647410  419646 cli_runner.go:164] Run: docker exec scheduled-stop-805745 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:19:54.702364  419646 oci.go:144] the created container "scheduled-stop-805745" has a running status.
	I1025 10:19:54.702383  419646 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa...
	I1025 10:19:55.160006  419646 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:19:55.181099  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:19:55.197853  419646 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:19:55.197863  419646 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-805745 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:19:55.237168  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:19:55.254870  419646 machine.go:93] provisionDockerMachine start ...
	I1025 10:19:55.254952  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:55.276479  419646 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:55.276830  419646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33337 <nil> <nil>}
	I1025 10:19:55.276838  419646 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:19:55.277447  419646 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46320->127.0.0.1:33337: read: connection reset by peer
	I1025 10:19:58.427809  419646 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-805745
	
	I1025 10:19:58.427825  419646 ubuntu.go:182] provisioning hostname "scheduled-stop-805745"
	I1025 10:19:58.427903  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:58.447752  419646 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:58.448053  419646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33337 <nil> <nil>}
	I1025 10:19:58.448062  419646 main.go:141] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-805745 && echo "scheduled-stop-805745" | sudo tee /etc/hostname
	I1025 10:19:58.605158  419646 main.go:141] libmachine: SSH cmd err, output: <nil>: scheduled-stop-805745
	
	I1025 10:19:58.605225  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:58.624862  419646 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:58.625175  419646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33337 <nil> <nil>}
	I1025 10:19:58.625189  419646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-805745' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-805745/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-805745' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:19:58.771470  419646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:19:58.771488  419646 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:19:58.771510  419646 ubuntu.go:190] setting up certificates
	I1025 10:19:58.771518  419646 provision.go:84] configureAuth start
	I1025 10:19:58.771580  419646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-805745
	I1025 10:19:58.788233  419646 provision.go:143] copyHostCerts
	I1025 10:19:58.788299  419646 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:19:58.788307  419646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:19:58.788384  419646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:19:58.788480  419646 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:19:58.788484  419646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:19:58.788510  419646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:19:58.788570  419646 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:19:58.788574  419646 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:19:58.788597  419646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:19:58.788649  419646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-805745 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-805745]
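
The server certificate is generated in Go against the profile CA with the SAN list shown above. An openssl equivalent for the same SAN set would look roughly like this (a sketch; assumes ca.pem/ca-key.pem from the certs directory and bash for the process substitution):

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.scheduled-stop-805745"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:scheduled-stop-805745')
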
	I1025 10:19:59.003585  419646 provision.go:177] copyRemoteCerts
	I1025 10:19:59.003644  419646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:19:59.003686  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.021750  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:19:59.126801  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:19:59.144953  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1025 10:19:59.163853  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:19:59.181904  419646 provision.go:87] duration metric: took 410.361701ms to configureAuth
	I1025 10:19:59.181921  419646 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:19:59.182115  419646 config.go:182] Loaded profile config "scheduled-stop-805745": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:19:59.182228  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.198906  419646 main.go:141] libmachine: Using SSH client type: native
	I1025 10:19:59.199247  419646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33337 <nil> <nil>}
	I1025 10:19:59.199260  419646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:19:59.454271  419646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:19:59.454285  419646 machine.go:96] duration metric: took 4.199402372s to provisionDockerMachine
	I1025 10:19:59.454293  419646 client.go:171] duration metric: took 10.301936823s to LocalClient.Create
	I1025 10:19:59.454303  419646 start.go:167] duration metric: took 10.30199828s to libmachine.API.Create "scheduled-stop-805745"
	I1025 10:19:59.454309  419646 start.go:293] postStartSetup for "scheduled-stop-805745" (driver="docker")
	I1025 10:19:59.454338  419646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:19:59.454400  419646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:19:59.454442  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.471819  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:19:59.575336  419646 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:19:59.578571  419646 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:19:59.578589  419646 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:19:59.578600  419646 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:19:59.578654  419646 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:19:59.578732  419646 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:19:59.578831  419646 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:19:59.586044  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:19:59.603073  419646 start.go:296] duration metric: took 148.745788ms for postStartSetup
	I1025 10:19:59.603456  419646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-805745
	I1025 10:19:59.619944  419646 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/config.json ...
	I1025 10:19:59.620305  419646 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:19:59.620354  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.637166  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:19:59.740069  419646 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:19:59.745394  419646 start.go:128] duration metric: took 10.596693995s to createHost
	I1025 10:19:59.745409  419646 start.go:83] releasing machines lock for "scheduled-stop-805745", held for 10.596806719s
	I1025 10:19:59.745479  419646 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-805745
	I1025 10:19:59.764603  419646 ssh_runner.go:195] Run: cat /version.json
	I1025 10:19:59.764644  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.764912  419646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:19:59.764971  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:19:59.787456  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:19:59.787567  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:19:59.890889  419646 ssh_runner.go:195] Run: systemctl --version
	I1025 10:19:59.898180  419646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:19:59.988840  419646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:19:59.993810  419646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:19:59.993882  419646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:20:00.086584  419646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
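
The find invocation above renames any bridge/podman CNI config so that only the CNI minikube installs (kindnet here) stays active. A quoted, portable rendering of the same command (a sketch; identical semantics minus the -printf logging, using POSIX find operators):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
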
	I1025 10:20:00.086600  419646 start.go:495] detecting cgroup driver to use...
	I1025 10:20:00.086654  419646 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:20:00.086725  419646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:20:00.125981  419646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:20:00.154774  419646 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:20:00.154840  419646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:20:00.192940  419646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:20:00.228811  419646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:20:00.446628  419646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:20:00.624868  419646 docker.go:234] disabling docker service ...
	I1025 10:20:00.624938  419646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:20:00.651939  419646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:20:00.670348  419646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:20:00.801742  419646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:20:00.927854  419646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:20:00.941486  419646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:20:00.957508  419646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:20:00.957578  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.966991  419646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:20:00.967050  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.976596  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.985184  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:00.993787  419646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:20:01.002396  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:01.012043  419646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:01.025557  419646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:20:01.034349  419646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:20:01.042286  419646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:20:01.049867  419646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:01.173071  419646 ssh_runner.go:195] Run: sudo systemctl restart crio
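
Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a small set of overrides: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl. One way to verify the net effect after the restart (a sketch; the commented output is approximate):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected, approximately:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
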
	I1025 10:20:01.301705  419646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:20:01.301768  419646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:20:01.305998  419646 start.go:563] Will wait 60s for crictl version
	I1025 10:20:01.306057  419646 ssh_runner.go:195] Run: which crictl
	I1025 10:20:01.310347  419646 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:20:01.336207  419646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:20:01.336285  419646 ssh_runner.go:195] Run: crio --version
	I1025 10:20:01.364901  419646 ssh_runner.go:195] Run: crio --version
	I1025 10:20:01.398011  419646 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:20:01.400806  419646 cli_runner.go:164] Run: docker network inspect scheduled-stop-805745 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:20:01.417289  419646 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:20:01.422647  419646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:01.433325  419646 kubeadm.go:883] updating cluster {Name:scheduled-stop-805745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-805745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:20:01.433432  419646 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:20:01.433489  419646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:01.471218  419646 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:01.471230  419646 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:20:01.471289  419646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:20:01.497904  419646 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:20:01.497917  419646 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:20:01.497924  419646 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:20:01.498007  419646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-805745 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-805745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:20:01.498087  419646 ssh_runner.go:195] Run: crio config
	I1025 10:20:01.552998  419646 cni.go:84] Creating CNI manager for ""
	I1025 10:20:01.553010  419646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:01.553027  419646 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:20:01.553050  419646 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-805745 NodeName:scheduled-stop-805745 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:20:01.553167  419646 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-805745"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
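
	The rendered kubeadm config above is staged at /var/tmp/minikube/kubeadm.yaml.new and copied into place before init. Since the docker driver cannot pass kubeadm's SystemVerification preflight (see the "ignoring SystemVerification" line further down), a manual dry run of the same file would look like this (a sketch; the flags are standard kubeadm ones):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --dry-run --ignore-preflight-errors=SystemVerification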
	
	I1025 10:20:01.553236  419646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:20:01.561304  419646 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:20:01.561396  419646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:20:01.569422  419646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1025 10:20:01.583126  419646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:20:01.596307  419646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1025 10:20:01.608965  419646 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:20:01.612716  419646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:20:01.622934  419646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:01.733551  419646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:01.750028  419646 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745 for IP: 192.168.76.2
	I1025 10:20:01.750040  419646 certs.go:195] generating shared ca certs ...
	I1025 10:20:01.750056  419646 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:01.750199  419646 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:20:01.750239  419646 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:20:01.750245  419646 certs.go:257] generating profile certs ...
	I1025 10:20:01.750301  419646 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.key
	I1025 10:20:01.750320  419646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.crt with IP's: []
	I1025 10:20:02.030128  419646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.crt ...
	I1025 10:20:02.030146  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.crt: {Name:mk47ad2ce25b8f98e97c98ea98b62de555b0ba9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:02.030382  419646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.key ...
	I1025 10:20:02.030391  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/client.key: {Name:mkce3a69b0d621210e0054c29ddf9fdf777e7ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:02.030493  419646 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key.e2286655
	I1025 10:20:02.030510  419646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt.e2286655 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:20:02.921008  419646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt.e2286655 ...
	I1025 10:20:02.921024  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt.e2286655: {Name:mkcfbf85cb8877995a373929305b5d09ef55df52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:02.921227  419646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key.e2286655 ...
	I1025 10:20:02.921235  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key.e2286655: {Name:mkc52bb4b5eb8d2cafb675117e517f67a7b173d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:02.921318  419646 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt.e2286655 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt
	I1025 10:20:02.921397  419646 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key.e2286655 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key
	I1025 10:20:02.921449  419646 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.key
	I1025 10:20:02.921461  419646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.crt with IP's: []
	I1025 10:20:03.031191  419646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.crt ...
	I1025 10:20:03.031206  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.crt: {Name:mk21a6f4a3d9e1837eb0393ff79471b7ec20361c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:03.031390  419646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.key ...
	I1025 10:20:03.031398  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.key: {Name:mke6811f97dfabd79fa52129ba958b7514615ec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:03.031591  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:20:03.031624  419646 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:20:03.031631  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:20:03.031654  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:20:03.031675  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:20:03.031695  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:20:03.031737  419646 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:20:03.032375  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:20:03.052559  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:20:03.071471  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:20:03.090435  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:20:03.108391  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:20:03.126227  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:20:03.144375  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:20:03.161687  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/scheduled-stop-805745/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:20:03.179310  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:20:03.196839  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:20:03.216012  419646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:20:03.234635  419646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:20:03.247912  419646 ssh_runner.go:195] Run: openssl version
	I1025 10:20:03.254128  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:20:03.262324  419646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:20:03.266001  419646 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:20:03.266051  419646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:20:03.312709  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:20:03.321612  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:20:03.329851  419646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:20:03.333891  419646 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:20:03.333947  419646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:20:03.375644  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:20:03.383732  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:20:03.391698  419646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:03.395234  419646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:03.395287  419646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:20:03.437904  419646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
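
The openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: a CA is trusted once /etc/ssl/certs/<subject-hash>.0 is a symlink to its PEM file, where the hash comes from `openssl x509 -hash -noout`. A minimal local sketch of that convention (the helper name and error handling are illustrative; minikube runs the equivalent commands over SSH, as logged):

    // Link a CA PEM into /etc/ssl/certs under its OpenSSL subject hash.
    // A sketch, not minikube's implementation: it shells out to openssl the
    // same way the log does and mimics `ln -fs` by replacing any old link.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkBySubjectHash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // mimic ln -fs: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
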
	I1025 10:20:03.446271  419646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:20:03.449955  419646 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:20:03.449999  419646 kubeadm.go:400] StartCluster: {Name:scheduled-stop-805745 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-805745 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:20:03.450067  419646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:20:03.450127  419646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:20:03.476839  419646 cri.go:89] found id: ""
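
The cri.go lookup lists existing kube-system pod containers by CRI label; the empty `found id: ""` means no such containers exist yet, so there is nothing to clean up before kubeadm runs. A sketch of the same crictl query, parsing its newline-separated container IDs (the command is copied from the log; the surrounding Go is illustrative):

    // List kube-system pod containers via crictl, as the log does.
    // An empty result (as seen on this fresh node) means no cleanup is needed.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	ids := strings.Fields(string(out)) // one container ID per line
    	fmt.Printf("found %d kube-system container(s): %v\n", len(ids), ids)
    }
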
	I1025 10:20:03.476909  419646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:20:03.484687  419646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:20:03.492741  419646 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:20:03.492793  419646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:20:03.500699  419646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:20:03.500707  419646 kubeadm.go:157] found existing configuration files:
	
	I1025 10:20:03.500768  419646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:20:03.508667  419646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:20:03.508742  419646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:20:03.516595  419646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:20:03.524475  419646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:20:03.524528  419646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:20:03.531961  419646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:20:03.539893  419646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:20:03.539946  419646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:20:03.547571  419646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:20:03.555188  419646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:20:03.555250  419646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
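
The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the upcoming `kubeadm init` writes a fresh copy. A local sketch of that loop, assuming direct file access rather than the logged SSH round-trips:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // cleanStaleKubeconfigs keeps a kubeconfig only if it already points at the
    // expected control-plane endpoint; anything else is deleted so that
    // `kubeadm init` regenerates it.
    func cleanStaleKubeconfigs(endpoint string, paths []string) {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err == nil && bytes.Contains(data, []byte(endpoint)) {
    			continue // already points at the right endpoint
    		}
    		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }

    func main() {
    	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
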
	I1025 10:20:03.562699  419646 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:20:03.600943  419646 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:20:03.600994  419646 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:20:03.628192  419646 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:20:03.628259  419646 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:20:03.628294  419646 kubeadm.go:318] OS: Linux
	I1025 10:20:03.628342  419646 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:20:03.628391  419646 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:20:03.628440  419646 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:20:03.628489  419646 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:20:03.628538  419646 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:20:03.628588  419646 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:20:03.628634  419646 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:20:03.628683  419646 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:20:03.628730  419646 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:20:03.696366  419646 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:20:03.696480  419646 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:20:03.696573  419646 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:20:03.707564  419646 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:20:03.714145  419646 out.go:252]   - Generating certificates and keys ...
	I1025 10:20:03.714232  419646 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:20:03.714300  419646 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:20:04.031657  419646 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:20:04.526412  419646 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:20:05.280985  419646 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:20:05.989928  419646 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:20:06.703523  419646 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:20:06.703661  419646 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-805745] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:20:07.232405  419646 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:20:07.232788  419646 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-805745] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:20:07.537541  419646 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:20:07.871809  419646 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:20:08.711431  419646 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:20:08.711670  419646 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:20:09.336392  419646 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:20:10.477714  419646 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:20:11.017457  419646 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:20:11.991496  419646 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:20:12.331079  419646 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:20:12.333612  419646 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:20:12.336412  419646 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:20:12.340124  419646 out.go:252]   - Booting up control plane ...
	I1025 10:20:12.340234  419646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:20:12.340315  419646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:20:12.341409  419646 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:20:12.358521  419646 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:20:12.358628  419646 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:20:12.366833  419646 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:20:12.366930  419646 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:20:12.366970  419646 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:20:12.499904  419646 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:20:12.500107  419646 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:20:14.006017  419646 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.503550091s
	I1025 10:20:14.007363  419646 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:20:14.007456  419646 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:20:14.007548  419646 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:20:14.007629  419646 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:20:17.819740  419646 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.811766329s
	I1025 10:20:18.726666  419646 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.719299917s
	I1025 10:20:20.509977  419646 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502250573s
	I1025 10:20:20.529569  419646 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:20:20.545940  419646 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:20:20.563478  419646 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:20:20.563907  419646 kubeadm.go:318] [mark-control-plane] Marking the node scheduled-stop-805745 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:20:20.576291  419646 kubeadm.go:318] [bootstrap-token] Using token: yavpji.ypdfo5euqfrpllg3
	I1025 10:20:20.579281  419646 out.go:252]   - Configuring RBAC rules ...
	I1025 10:20:20.579424  419646 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:20:20.584042  419646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:20:20.595092  419646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:20:20.599515  419646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:20:20.605897  419646 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:20:20.609926  419646 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:20:20.919420  419646 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:20:21.360914  419646 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:20:21.916839  419646 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:20:21.918012  419646 kubeadm.go:318] 
	I1025 10:20:21.918080  419646 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:20:21.918085  419646 kubeadm.go:318] 
	I1025 10:20:21.918164  419646 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:20:21.918168  419646 kubeadm.go:318] 
	I1025 10:20:21.918193  419646 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:20:21.918253  419646 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:20:21.918305  419646 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:20:21.918308  419646 kubeadm.go:318] 
	I1025 10:20:21.918363  419646 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:20:21.918367  419646 kubeadm.go:318] 
	I1025 10:20:21.918415  419646 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:20:21.918419  419646 kubeadm.go:318] 
	I1025 10:20:21.918473  419646 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:20:21.918549  419646 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:20:21.918619  419646 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:20:21.918622  419646 kubeadm.go:318] 
	I1025 10:20:21.918709  419646 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:20:21.918788  419646 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:20:21.918797  419646 kubeadm.go:318] 
	I1025 10:20:21.918883  419646 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yavpji.ypdfo5euqfrpllg3 \
	I1025 10:20:21.918990  419646 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:20:21.919010  419646 kubeadm.go:318] 	--control-plane 
	I1025 10:20:21.919013  419646 kubeadm.go:318] 
	I1025 10:20:21.919101  419646 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:20:21.919104  419646 kubeadm.go:318] 
	I1025 10:20:21.919214  419646 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yavpji.ypdfo5euqfrpllg3 \
	I1025 10:20:21.919326  419646 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:20:21.923293  419646 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:20:21.923546  419646 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:20:21.923671  419646 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:20:21.923707  419646 cni.go:84] Creating CNI manager for ""
	I1025 10:20:21.923719  419646 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:20:21.926958  419646 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:20:21.929977  419646 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:20:21.934186  419646 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:20:21.934196  419646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:20:21.950813  419646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
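
At 10:20:21.934 the kindnet manifest is staged on the node as /var/tmp/minikube/cni.yaml ("scp memory", 2601 bytes) and then applied with the cluster's version-pinned kubectl. A sketch of those two steps, with the binary and kubeconfig paths copied from the log and a hypothetical local source for the manifest bytes:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // applyCNI mirrors the two logged steps: stage the manifest bytes on disk
    // ("scp memory" in the log) and apply them with the pinned kubectl.
    func applyCNI(manifest []byte) error {
    	const path = "/var/tmp/minikube/cni.yaml"
    	if err := os.WriteFile(path, manifest, 0o644); err != nil {
    		return err
    	}
    	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
    		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	return cmd.Run()
    }

    func main() {
    	manifest, err := os.ReadFile("cni.yaml") // hypothetical local copy of the kindnet manifest
    	if err != nil {
    		panic(err)
    	}
    	if err := applyCNI(manifest); err != nil {
    		panic(err)
    	}
    }
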
	I1025 10:20:22.244828  419646 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:20:22.244951  419646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:20:22.245027  419646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-805745 minikube.k8s.io/updated_at=2025_10_25T10_20_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=scheduled-stop-805745 minikube.k8s.io/primary=true
	I1025 10:20:22.384387  419646 ops.go:34] apiserver oom_adj: -16
	I1025 10:20:22.384408  419646 kubeadm.go:1113] duration metric: took 139.513331ms to wait for elevateKubeSystemPrivileges
	I1025 10:20:22.384421  419646 kubeadm.go:402] duration metric: took 18.934426093s to StartCluster
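
The oom_adj probe at 10:20:22.244828 reads the apiserver's legacy /proc/<pid>/oom_adj and logs -16, i.e. the kernel's OOM killer should strongly avoid the process. A local sketch of the same read, with PID discovery via pgrep as in the log (error handling illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		return
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
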
	I1025 10:20:22.384436  419646 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:22.384493  419646 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:20:22.385179  419646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:20:22.385379  419646 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:20:22.385464  419646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:20:22.385684  419646 config.go:182] Loaded profile config "scheduled-stop-805745": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:20:22.385713  419646 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:20:22.385768  419646 addons.go:69] Setting storage-provisioner=true in profile "scheduled-stop-805745"
	I1025 10:20:22.385785  419646 addons.go:238] Setting addon storage-provisioner=true in "scheduled-stop-805745"
	I1025 10:20:22.385804  419646 host.go:66] Checking if "scheduled-stop-805745" exists ...
	I1025 10:20:22.386231  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:20:22.386660  419646 addons.go:69] Setting default-storageclass=true in profile "scheduled-stop-805745"
	I1025 10:20:22.386675  419646 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-805745"
	I1025 10:20:22.386936  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:20:22.388652  419646 out.go:179] * Verifying Kubernetes components...
	I1025 10:20:22.392481  419646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:20:22.441821  419646 addons.go:238] Setting addon default-storageclass=true in "scheduled-stop-805745"
	I1025 10:20:22.441850  419646 host.go:66] Checking if "scheduled-stop-805745" exists ...
	I1025 10:20:22.442264  419646 cli_runner.go:164] Run: docker container inspect scheduled-stop-805745 --format={{.State.Status}}
	I1025 10:20:22.442445  419646 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:20:22.445306  419646 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.445317  419646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:20:22.445380  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:20:22.467445  419646 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:22.467465  419646 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:20:22.467552  419646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-805745
	I1025 10:20:22.502418  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:20:22.508939  419646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33337 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/scheduled-stop-805745/id_rsa Username:docker}
	I1025 10:20:22.724629  419646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:20:22.734929  419646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:20:22.735024  419646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:20:22.827440  419646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:20:23.306519  419646 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
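
The sed pipeline at 10:20:22.734929 edits the CoreDNS Corefile in place: a hosts stanza mapping host.minikube.internal to the container gateway is inserted before the forward plugin, and log is enabled before errors, after which the ConfigMap is replaced via kubectl. Reconstructed from those sed expressions (the rest of the Corefile is elided, not captured from the cluster), the affected fragment becomes roughly:

    log
    errors
    ...
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
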
	I1025 10:20:23.307358  419646 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:20:23.307562  419646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:20:23.323759  419646 api_server.go:72] duration metric: took 938.353805ms to wait for apiserver process to appear ...
	I1025 10:20:23.323773  419646 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:20:23.323790  419646 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:20:23.347633  419646 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:20:23.349170  419646 api_server.go:141] control plane version: v1.34.1
	I1025 10:20:23.349186  419646 api_server.go:131] duration metric: took 25.408756ms to wait for apiserver health ...
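
The healthz wait above polls https://192.168.76.2:8443/healthz until it returns 200 with body "ok". A self-contained sketch of that poll; for brevity it skips TLS verification, whereas minikube verifies the apiserver against the cluster CA:

    // Poll the apiserver's /healthz endpoint until it reports healthy or a
    // one-minute deadline passes. Endpoint and expected response mirror the
    // log; timeouts and the insecure TLS config are illustrative choices.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for deadline := time.Now().Add(time.Minute); time.Now().Before(deadline); time.Sleep(time.Second) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err != nil {
    			continue // apiserver not answering yet
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Printf("healthz: %s\n", body)
    			return
    		}
    	}
    	fmt.Println("apiserver never became healthy")
    }
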
	I1025 10:20:23.349193  419646 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:20:23.355010  419646 system_pods.go:59] 5 kube-system pods found
	I1025 10:20:23.355028  419646 system_pods.go:61] "etcd-scheduled-stop-805745" [10593128-e804-4791-82d6-22544698e6da] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:20:23.355033  419646 system_pods.go:61] "kube-apiserver-scheduled-stop-805745" [f325f52b-6a06-451d-afa8-34d50662abd6] Running
	I1025 10:20:23.355040  419646 system_pods.go:61] "kube-controller-manager-scheduled-stop-805745" [7a24ea0f-b829-47f7-ba91-9b30d853261b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:20:23.355046  419646 system_pods.go:61] "kube-scheduler-scheduled-stop-805745" [eed2ba11-e848-487a-b844-2d620f38ea74] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:20:23.355051  419646 system_pods.go:61] "storage-provisioner" [30ba5ad5-ac50-46bb-8bb5-881a8c6eace3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:20:23.355056  419646 system_pods.go:74] duration metric: took 5.858244ms to wait for pod list to return data ...
	I1025 10:20:23.355066  419646 kubeadm.go:586] duration metric: took 969.668355ms to wait for: map[apiserver:true system_pods:true]
	I1025 10:20:23.355077  419646 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:20:23.357483  419646 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:20:23.358174  419646 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:20:23.358192  419646 node_conditions.go:123] node cpu capacity is 2
	I1025 10:20:23.358203  419646 node_conditions.go:105] duration metric: took 3.121527ms to run NodePressure ...
	I1025 10:20:23.358214  419646 start.go:241] waiting for startup goroutines ...
	I1025 10:20:23.360459  419646 addons.go:514] duration metric: took 974.728221ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:20:23.811081  419646 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-805745" context rescaled to 1 replicas
	I1025 10:20:23.811110  419646 start.go:246] waiting for cluster config update ...
	I1025 10:20:23.811121  419646 start.go:255] writing updated cluster config ...
	I1025 10:20:23.811462  419646 ssh_runner.go:195] Run: rm -f paused
	I1025 10:20:23.871286  419646 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:20:23.874408  419646 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-805745" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.104862183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.108613404Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-805745/kube-scheduler" id=18c2d518-9ad4-40b3-aa52-1e408a24fa31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.108773636Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.114599018Z" level=info msg="Creating container: kube-system/etcd-scheduled-stop-805745/etcd" id=f6ef9f7d-0354-4f15-b969-ace877744a30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.114896072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.116533198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.117158092Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.125134379Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.125873351Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.13918817Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.139950847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.150709103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.151943409Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.163475361Z" level=info msg="Created container 8fec78605c04cb76e11cfb51689670a4b220fadd9ad8663f7d4c28f30c687c6b: kube-system/kube-controller-manager-scheduled-stop-805745/kube-controller-manager" id=8e0d4843-58fb-450a-87da-3b1cc63bcda7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.164487617Z" level=info msg="Starting container: 8fec78605c04cb76e11cfb51689670a4b220fadd9ad8663f7d4c28f30c687c6b" id=67739600-f3a3-4ca5-abbe-e98e1db7c0dc name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.167023618Z" level=info msg="Created container ad9ecdf7a85d3d5ce31920e9434bd5cf90b6062ac20660405103e74b6f52dc9c: kube-system/kube-scheduler-scheduled-stop-805745/kube-scheduler" id=18c2d518-9ad4-40b3-aa52-1e408a24fa31 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.171430995Z" level=info msg="Started container" PID=1244 containerID=8fec78605c04cb76e11cfb51689670a4b220fadd9ad8663f7d4c28f30c687c6b description=kube-system/kube-controller-manager-scheduled-stop-805745/kube-controller-manager id=67739600-f3a3-4ca5-abbe-e98e1db7c0dc name=/runtime.v1.RuntimeService/StartContainer sandboxID=597e32d0ae2159dded07e138215e5a12f4f5971241aaf01f1bd7bff0a750c53b
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.171966002Z" level=info msg="Starting container: ad9ecdf7a85d3d5ce31920e9434bd5cf90b6062ac20660405103e74b6f52dc9c" id=adbd087d-3024-4314-803f-26dab1beae1d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.182788874Z" level=info msg="Started container" PID=1249 containerID=ad9ecdf7a85d3d5ce31920e9434bd5cf90b6062ac20660405103e74b6f52dc9c description=kube-system/kube-scheduler-scheduled-stop-805745/kube-scheduler id=adbd087d-3024-4314-803f-26dab1beae1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d394921a73c2a960f6b24c2759c346f60974528f13444592ecbce6a93e7ab499
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.193520627Z" level=info msg="Created container 066afb3f4a3cfc98b91e9993ce426e31e80e9a02352c5361fbad8cb0aaff9c35: kube-system/kube-apiserver-scheduled-stop-805745/kube-apiserver" id=a23239f3-37f0-40e8-ab2d-e0aaacedbadf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.194401049Z" level=info msg="Starting container: 066afb3f4a3cfc98b91e9993ce426e31e80e9a02352c5361fbad8cb0aaff9c35" id=e1bd50a2-b49f-41cf-9598-b1d265a9d305 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.19662265Z" level=info msg="Started container" PID=1261 containerID=066afb3f4a3cfc98b91e9993ce426e31e80e9a02352c5361fbad8cb0aaff9c35 description=kube-system/kube-apiserver-scheduled-stop-805745/kube-apiserver id=e1bd50a2-b49f-41cf-9598-b1d265a9d305 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b372ef9c96100911444a7bab634e831d8b30ca76483cbd610d2fd966febe2e06
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.201268751Z" level=info msg="Created container dee92fab30b55ddac2150d9dcb73766d6213dc757aa6a620e34df401959f9676: kube-system/etcd-scheduled-stop-805745/etcd" id=f6ef9f7d-0354-4f15-b969-ace877744a30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.202180377Z" level=info msg="Starting container: dee92fab30b55ddac2150d9dcb73766d6213dc757aa6a620e34df401959f9676" id=028fcd15-64d1-4a9f-ad05-34e8f4fe5a33 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:20:14 scheduled-stop-805745 crio[840]: time="2025-10-25T10:20:14.208297259Z" level=info msg="Started container" PID=1262 containerID=dee92fab30b55ddac2150d9dcb73766d6213dc757aa6a620e34df401959f9676 description=kube-system/etcd-scheduled-stop-805745/etcd id=028fcd15-64d1-4a9f-ad05-34e8f4fe5a33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e815f6a20e0f610cf3a0bcdca9b73319ae0ff1919440effa93590161d369e04
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	dee92fab30b55       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      0                   9e815f6a20e0f       etcd-scheduled-stop-805745                      kube-system
	066afb3f4a3cf       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            0                   b372ef9c96100       kube-apiserver-scheduled-stop-805745            kube-system
	ad9ecdf7a85d3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            0                   d394921a73c2a       kube-scheduler-scheduled-stop-805745            kube-system
	8fec78605c04c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   0                   597e32d0ae215       kube-controller-manager-scheduled-stop-805745   kube-system
	
	
	==> describe nodes <==
	Name:               scheduled-stop-805745
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-805745
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=scheduled-stop-805745
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_20_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:20:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-805745
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:20:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:20:21 +0000   Sat, 25 Oct 2025 10:20:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:20:21 +0000   Sat, 25 Oct 2025 10:20:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:20:21 +0000   Sat, 25 Oct 2025 10:20:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:20:21 +0000   Sat, 25 Oct 2025 10:20:14 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-805745
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                cdb34633-2754-4327-b84a-dfa353315126
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-805745                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-805745             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-805745    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-805745             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node scheduled-stop-805745 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node scheduled-stop-805745 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node scheduled-stop-805745 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s                 kubelet          Node scheduled-stop-805745 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet          Node scheduled-stop-805745 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet          Node scheduled-stop-805745 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           0s                 node-controller  Node scheduled-stop-805745 event: Registered Node scheduled-stop-805745 in Controller
	
	
	==> dmesg <==
	[Oct25 09:57] overlayfs: idmapped layers are currently not supported
	[Oct25 09:58] overlayfs: idmapped layers are currently not supported
	[Oct25 09:59] overlayfs: idmapped layers are currently not supported
	[  +3.114756] overlayfs: idmapped layers are currently not supported
	[Oct25 10:00] overlayfs: idmapped layers are currently not supported
	[Oct25 10:01] overlayfs: idmapped layers are currently not supported
	[Oct25 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.771525] overlayfs: idmapped layers are currently not supported
	[ +47.892456] overlayfs: idmapped layers are currently not supported
	[Oct25 10:03] overlayfs: idmapped layers are currently not supported
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [dee92fab30b55ddac2150d9dcb73766d6213dc757aa6a620e34df401959f9676] <==
	{"level":"warn","ts":"2025-10-25T10:20:17.113248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.139511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.167296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.202393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.227093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.244624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.279924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.296796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.316503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.353263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.378355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.423203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.449277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.487418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.531460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.532918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.551529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.577304Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.607636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.611582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.634810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.675463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.691367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.729879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:20:17.851248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41684","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:20:25 up  2:02,  0 user,  load average: 1.82, 1.85, 2.10
	Linux scheduled-stop-805745 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [066afb3f4a3cfc98b91e9993ce426e31e80e9a02352c5361fbad8cb0aaff9c35] <==
	I1025 10:20:18.744996       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:20:18.745015       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:20:18.745022       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:20:18.745028       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:20:18.745044       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:20:18.745067       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E1025 10:20:18.750397       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1025 10:20:18.753636       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:20:18.767110       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:18.770636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:20:18.770702       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:20:18.957979       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:20:19.439860       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:20:19.447859       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:20:19.448059       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:20:20.202818       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:20:20.263489       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:20:20.358263       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:20:20.365962       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:20:20.367072       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:20:20.372157       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:20:20.700411       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:20:21.336591       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:20:21.358605       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:20:21.368921       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [8fec78605c04cb76e11cfb51689670a4b220fadd9ad8663f7d4c28f30c687c6b] <==
	I1025 10:20:25.745758       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:20:25.746889       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:20:25.746920       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:20:25.746949       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:20:25.747014       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:25.747026       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:20:25.747038       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:20:25.747078       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:20:25.747014       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:20:25.746966       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:20:25.747024       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:20:25.747544       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:20:25.749672       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:20:25.752351       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:20:25.753073       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:20:25.753120       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:20:25.753138       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:20:25.753142       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:20:25.753147       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:20:25.753335       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:20:25.757292       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:20:25.762021       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:20:25.768155       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="scheduled-stop-805745" podCIDRs=["10.244.0.0/24"]
	I1025 10:20:25.787723       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:20:25.796362       1 shared_informer.go:356] "Caches are synced" controller="service account"
	
	
	==> kube-scheduler [ad9ecdf7a85d3d5ce31920e9434bd5cf90b6062ac20660405103e74b6f52dc9c] <==
	E1025 10:20:18.720823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:18.720921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:20:18.721076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:20:18.721174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:20:18.721276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:20:18.724692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:20:18.724811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:20:18.724905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:20:18.725128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:20:18.725186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:20:18.725234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:20:18.726659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:20:19.538394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:20:19.563203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:20:19.613172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:20:19.699990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:20:19.789537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:20:19.822470       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:20:19.824871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:20:19.827653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:20:19.861057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:20:19.885409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:20:19.917514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:20:20.015459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1025 10:20:21.809484       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592412    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97ce2da4da7f16260f82bb09490699ce-usr-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-805745\" (UID: \"97ce2da4da7f16260f82bb09490699ce\") " pod="kube-system/kube-apiserver-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592491    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592586    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592682    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-usr-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592768    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97ce2da4da7f16260f82bb09490699ce-ca-certs\") pod \"kube-apiserver-scheduled-stop-805745\" (UID: \"97ce2da4da7f16260f82bb09490699ce\") " pod="kube-system/kube-apiserver-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592843    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-ca-certs\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.592925    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e125c682c0c29745acbe18e130343d2f-kubeconfig\") pod \"kube-scheduler-scheduled-stop-805745\" (UID: \"e125c682c0c29745acbe18e130343d2f\") " pod="kube-system/kube-scheduler-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593015    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/4307d37eb853ae4a499dfad200370d22-etcd-certs\") pod \"etcd-scheduled-stop-805745\" (UID: \"4307d37eb853ae4a499dfad200370d22\") " pod="kube-system/etcd-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593096    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593177    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593279    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97ce2da4da7f16260f82bb09490699ce-k8s-certs\") pod \"kube-apiserver-scheduled-stop-805745\" (UID: \"97ce2da4da7f16260f82bb09490699ce\") " pod="kube-system/kube-apiserver-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593370    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97ce2da4da7f16260f82bb09490699ce-usr-local-share-ca-certificates\") pod \"kube-apiserver-scheduled-stop-805745\" (UID: \"97ce2da4da7f16260f82bb09490699ce\") " pod="kube-system/kube-apiserver-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: I1025 10:20:21.593446    1320 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6c344fa20362a4957e8f97f0df0bcf96-usr-local-share-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-805745\" (UID: \"6c344fa20362a4957e8f97f0df0bcf96\") " pod="kube-system/kube-controller-manager-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: E1025 10:20:21.603981    1320 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-805745\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-805745"
	Oct 25 10:20:21 scheduled-stop-805745 kubelet[1320]: E1025 10:20:21.603982    1320 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-805745\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-805745"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.247527    1320 apiserver.go:52] "Watching apiserver"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.290851    1320 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.405076    1320 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-805745"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: E1025 10:20:22.425574    1320 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-805745\" already exists" pod="kube-system/etcd-scheduled-stop-805745"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.453565    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-805745" podStartSLOduration=1.453544743 podStartE2EDuration="1.453544743s" podCreationTimestamp="2025-10-25 10:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:22.424901999 +0000 UTC m=+1.274629280" watchObservedRunningTime="2025-10-25 10:20:22.453544743 +0000 UTC m=+1.303272007"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.514191    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-805745" podStartSLOduration=1.5141719120000001 podStartE2EDuration="1.514171912s" podCreationTimestamp="2025-10-25 10:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:22.513982535 +0000 UTC m=+1.363709890" watchObservedRunningTime="2025-10-25 10:20:22.514171912 +0000 UTC m=+1.363899185"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.514457    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-805745" podStartSLOduration=2.514444539 podStartE2EDuration="2.514444539s" podCreationTimestamp="2025-10-25 10:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:22.453933499 +0000 UTC m=+1.303660796" watchObservedRunningTime="2025-10-25 10:20:22.514444539 +0000 UTC m=+1.364171804"
	Oct 25 10:20:22 scheduled-stop-805745 kubelet[1320]: I1025 10:20:22.542079    1320 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-805745" podStartSLOduration=1.5420612839999999 podStartE2EDuration="1.542061284s" podCreationTimestamp="2025-10-25 10:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:20:22.540920715 +0000 UTC m=+1.390647988" watchObservedRunningTime="2025-10-25 10:20:22.542061284 +0000 UTC m=+1.391788549"
	Oct 25 10:20:25 scheduled-stop-805745 kubelet[1320]: I1025 10:20:25.774395    1320 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:20:25 scheduled-stop-805745 kubelet[1320]: I1025 10:20:25.775669    1320 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-805745 -n scheduled-stop-805745
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-805745 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: kindnet-4z659 kube-proxy-vgtc9 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-805745 describe pod kindnet-4z659 kube-proxy-vgtc9 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-805745 describe pod kindnet-4z659 kube-proxy-vgtc9 storage-provisioner: exit status 1 (146.68574ms)

** stderr ** 
	Error from server (NotFound): pods "kindnet-4z659" not found
	Error from server (NotFound): pods "kube-proxy-vgtc9" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-805745 describe pod kindnet-4z659 kube-proxy-vgtc9 storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-805745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-805745
E1025 10:20:28.677917  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-805745: (2.137474973s)
--- FAIL: TestScheduledStopUnix (40.02s)
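
Note on the post-mortem above: it is racy by design. The field-selector query at helpers_test.go:269 still saw kindnet-4z659, kube-proxy-vgtc9 and storage-provisioner as non-running, but profile cleanup had already deleted them by the time `kubectl describe` ran, hence the NotFound errors. A minimal Go sketch of that query, assuming plain os/exec and the context name from this run; an illustration, not the helpers_test.go implementation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// nonRunningPods mirrors the post-mortem query above: list every pod,
	// in all namespaces, whose phase is not Running.
	func nonRunningPods(kubeContext string) ([]string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").Output()
		if err != nil {
			return nil, fmt.Errorf("listing non-running pods: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		pods, err := nonRunningPods("scheduled-stop-805745")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Any pod listed here can vanish before a follow-up describe runs,
		// so a NotFound from `kubectl describe` during teardown is expected.
		fmt.Println("non-running pods:", pods)
	}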

TestPause/serial/Pause (7.31s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-598105 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-598105 --alsologtostderr -v=5: exit status 80 (2.462093384s)

-- stdout --
	* Pausing node pause-598105 ... 
	
	

-- /stdout --
** stderr ** 
	I1025 10:26:26.293232  456088 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:26:26.294221  456088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:26:26.294258  456088 out.go:374] Setting ErrFile to fd 2...
	I1025 10:26:26.294281  456088 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:26:26.294559  456088 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:26:26.294831  456088 out.go:368] Setting JSON to false
	I1025 10:26:26.294889  456088 mustload.go:65] Loading cluster: pause-598105
	I1025 10:26:26.295917  456088 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:26:26.296447  456088 cli_runner.go:164] Run: docker container inspect pause-598105 --format={{.State.Status}}
	I1025 10:26:26.316611  456088 host.go:66] Checking if "pause-598105" exists ...
	I1025 10:26:26.316935  456088 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:26:26.373127  456088 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:26:26.364120267 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:26:26.373768  456088 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-598105 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:26:26.376738  456088 out.go:179] * Pausing node pause-598105 ... 
	I1025 10:26:26.380400  456088 host.go:66] Checking if "pause-598105" exists ...
	I1025 10:26:26.380728  456088 ssh_runner.go:195] Run: systemctl --version
	I1025 10:26:26.380777  456088 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:26.401340  456088 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:26.506056  456088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:26:26.526187  456088 pause.go:52] kubelet running: true
	I1025 10:26:26.526353  456088 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:26:26.722459  456088 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:26:26.722543  456088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:26:26.794623  456088 cri.go:89] found id: "e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b"
	I1025 10:26:26.794646  456088 cri.go:89] found id: "54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd"
	I1025 10:26:26.794651  456088 cri.go:89] found id: "7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f"
	I1025 10:26:26.794654  456088 cri.go:89] found id: "bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2"
	I1025 10:26:26.794670  456088 cri.go:89] found id: "12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2"
	I1025 10:26:26.794674  456088 cri.go:89] found id: "140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71"
	I1025 10:26:26.794677  456088 cri.go:89] found id: "567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d"
	I1025 10:26:26.794680  456088 cri.go:89] found id: "c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2"
	I1025 10:26:26.794683  456088 cri.go:89] found id: "a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	I1025 10:26:26.794690  456088 cri.go:89] found id: "f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb"
	I1025 10:26:26.794694  456088 cri.go:89] found id: "7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	I1025 10:26:26.794697  456088 cri.go:89] found id: "c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b"
	I1025 10:26:26.794703  456088 cri.go:89] found id: "f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a"
	I1025 10:26:26.794707  456088 cri.go:89] found id: "40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4"
	I1025 10:26:26.794710  456088 cri.go:89] found id: ""
	I1025 10:26:26.794765  456088 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:26:26.807056  456088 retry.go:31] will retry after 371.650741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:26Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:26:27.179669  456088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:26:27.194276  456088 pause.go:52] kubelet running: false
	I1025 10:26:27.194368  456088 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:26:27.393605  456088 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:26:27.393689  456088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:26:27.510257  456088 cri.go:89] found id: "e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b"
	I1025 10:26:27.510275  456088 cri.go:89] found id: "54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd"
	I1025 10:26:27.510280  456088 cri.go:89] found id: "7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f"
	I1025 10:26:27.510283  456088 cri.go:89] found id: "bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2"
	I1025 10:26:27.510287  456088 cri.go:89] found id: "12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2"
	I1025 10:26:27.510290  456088 cri.go:89] found id: "140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71"
	I1025 10:26:27.510294  456088 cri.go:89] found id: "567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d"
	I1025 10:26:27.510297  456088 cri.go:89] found id: "c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2"
	I1025 10:26:27.510299  456088 cri.go:89] found id: "a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	I1025 10:26:27.510306  456088 cri.go:89] found id: "f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb"
	I1025 10:26:27.510309  456088 cri.go:89] found id: "7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	I1025 10:26:27.510312  456088 cri.go:89] found id: "c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b"
	I1025 10:26:27.510315  456088 cri.go:89] found id: "f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a"
	I1025 10:26:27.510318  456088 cri.go:89] found id: "40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4"
	I1025 10:26:27.510321  456088 cri.go:89] found id: ""
	I1025 10:26:27.510384  456088 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:26:27.522222  456088 retry.go:31] will retry after 411.624918ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:27Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:26:27.934821  456088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:26:27.947767  456088 pause.go:52] kubelet running: false
	I1025 10:26:27.947829  456088 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:26:28.094015  456088 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:26:28.094094  456088 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:26:28.159096  456088 cri.go:89] found id: "e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b"
	I1025 10:26:28.159122  456088 cri.go:89] found id: "54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd"
	I1025 10:26:28.159127  456088 cri.go:89] found id: "7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f"
	I1025 10:26:28.159131  456088 cri.go:89] found id: "bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2"
	I1025 10:26:28.159135  456088 cri.go:89] found id: "12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2"
	I1025 10:26:28.159139  456088 cri.go:89] found id: "140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71"
	I1025 10:26:28.159177  456088 cri.go:89] found id: "567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d"
	I1025 10:26:28.159181  456088 cri.go:89] found id: "c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2"
	I1025 10:26:28.159185  456088 cri.go:89] found id: "a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	I1025 10:26:28.159195  456088 cri.go:89] found id: "f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb"
	I1025 10:26:28.159202  456088 cri.go:89] found id: "7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	I1025 10:26:28.159205  456088 cri.go:89] found id: "c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b"
	I1025 10:26:28.159208  456088 cri.go:89] found id: "f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a"
	I1025 10:26:28.159213  456088 cri.go:89] found id: "40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4"
	I1025 10:26:28.159218  456088 cri.go:89] found id: ""
	I1025 10:26:28.159270  456088 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:26:28.173874  456088 out.go:203] 
	W1025 10:26:28.176896  456088 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:26:28.176920  456088 out.go:285] * 
	* 
	W1025 10:26:28.693036  456088 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:26:28.696185  456088 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-598105 --alsologtostderr -v=5" : exit status 80
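
Note on the GUEST_PAUSE failure above: crictl finds the CRI-O containers, but the follow-up `sudo runc list -f json` exits 1 because /run/runc does not exist on the node, and the pause path gives up after its retries. A minimal Go sketch of a tolerant listing step, under the assumption that a missing runc state directory simply means runc is tracking no containers; an illustration only, not minikube's actual fix:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listRuncContainers runs the same `sudo runc list -f json` the pause
	// path retried above, but treats a missing state directory as an empty
	// container list instead of a hard failure.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil && strings.Contains(string(out), "no such file or directory") {
			return "[]", nil // runc has tracked no containers on this node yet
		}
		if err != nil {
			return "", fmt.Errorf("runc list: %w: %s", err, out)
		}
		return string(out), nil
	}

	func main() {
		containers, err := listRuncContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(containers)
	}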
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-598105
helpers_test.go:243: (dbg) docker inspect pause-598105:

-- stdout --
	[
	    {
	        "Id": "7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9",
	        "Created": "2025-10-25T10:24:41.376840239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449937,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:24:41.443125084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/hosts",
	        "LogPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9-json.log",
	        "Name": "/pause-598105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-598105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-598105",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9",
	                "LowerDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-598105",
	                "Source": "/var/lib/docker/volumes/pause-598105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-598105",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-598105",
	                "name.minikube.sigs.k8s.io": "pause-598105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25cff9f3863d49a8056c76e564466dd9b76ecd41ce771e0315b8510499a78f0d",
	            "SandboxKey": "/var/run/docker/netns/25cff9f3863d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-598105": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:45:95:e5:dc:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45375a1d630ca21e9d5e39c177795db37f449d933054370e1fa920adf3c027d9",
	                    "EndpointID": "624292d51ce4b0d4a90ac2f8dc0270c107cd617bce6b1f24bd54fb1011666386",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-598105",
	                        "7ac8227c6068"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
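
Note on the inspect output above: the NetworkSettings.Ports block is what cli_runner queried at 10:26:26.380777 to find the SSH endpoint (22/tcp published on 127.0.0.1:33397). A short Go sketch that resolves the port with the same --format template the pause log shows; the container name is taken from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort asks the Docker CLI for the host port bound to the
	// container's 22/tcp, using the template cli_runner logged above.
	func hostSSHPort(container string) (string, error) {
		tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
	}

	func main() {
		port, err := hostSSHPort("pause-598105")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh published on 127.0.0.1:" + port) // 33397 in the inspect output above
	}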
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-598105 -n pause-598105
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-598105 -n pause-598105: exit status 2 (338.091959ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-598105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-598105 logs -n 25: (1.461992387s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ start   │ -p NoKubernetes-704940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p missing-upgrade-353666 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-353666    │ jenkins │ v1.32.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ ssh     │ -p NoKubernetes-704940 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ start   │ -p missing-upgrade-353666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-353666    │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ stop    │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ ssh     │ -p NoKubernetes-704940 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ delete  │ -p missing-upgrade-353666                                                                                                                │ missing-upgrade-353666    │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p stopped-upgrade-853068 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-853068    │ jenkins │ v1.32.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ stop    │ stopped-upgrade-853068 stop                                                                                                              │ stopped-upgrade-853068    │ jenkins │ v1.32.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p stopped-upgrade-853068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-853068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ delete  │ -p stopped-upgrade-853068                                                                                                                │ stopped-upgrade-853068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p running-upgrade-567548 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-567548    │ jenkins │ v1.32.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:24 UTC │
	│ start   │ -p running-upgrade-567548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-567548    │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:24 UTC │
	│ delete  │ -p running-upgrade-567548                                                                                                                │ running-upgrade-567548    │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:24 UTC │
	│ start   │ -p pause-598105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:25 UTC │
	│ start   │ -p pause-598105 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:25 UTC │ 25 Oct 25 10:26 UTC │
	│ pause   │ -p pause-598105 --alsologtostderr -v=5                                                                                                   │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
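
The last row in the table is the step under test: `pause -p pause-598105 --alsologtostderr -v=5` was issued at 10:26 UTC and never recorded a completion time. A minimal sketch for replaying that step by hand, with the binary path and flags copied from the table (assumes the cluster created by the preceding start rows is still running):

    # Re-run the failing pause step with the same verbosity
    out/minikube-linux-arm64 pause -p pause-598105 --alsologtostderr -v=5

    # Then check what state the profile was left in
    out/minikube-linux-arm64 status -p pause-598105
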
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:25:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:25:58.673660  454182 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:25:58.673838  454182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:25:58.673868  454182 out.go:374] Setting ErrFile to fd 2...
	I1025 10:25:58.673888  454182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:25:58.674245  454182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:25:58.674666  454182 out.go:368] Setting JSON to false
	I1025 10:25:58.675981  454182 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7709,"bootTime":1761380250,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:25:58.676087  454182 start.go:141] virtualization:  
	I1025 10:25:58.679207  454182 out.go:179] * [pause-598105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:25:58.683300  454182 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:25:58.683415  454182 notify.go:220] Checking for updates...
	I1025 10:25:58.689570  454182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:25:58.692617  454182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:25:58.695533  454182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:25:58.698428  454182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:25:58.701335  454182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:25:58.704769  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:25:58.705324  454182 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:25:58.734489  454182 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:25:58.734611  454182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:25:58.799018  454182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:25:58.789583652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:25:58.799168  454182 docker.go:318] overlay module found
	I1025 10:25:58.802389  454182 out.go:179] * Using the docker driver based on existing profile
	I1025 10:25:58.805317  454182 start.go:305] selected driver: docker
	I1025 10:25:58.805338  454182 start.go:925] validating driver "docker" against &{Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:25:58.805475  454182 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:25:58.805585  454182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:25:58.868444  454182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:25:58.859468956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:25:58.868869  454182 cni.go:84] Creating CNI manager for ""
	I1025 10:25:58.868940  454182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:25:58.868994  454182 start.go:349] cluster config:
	{Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:25:58.874046  454182 out.go:179] * Starting "pause-598105" primary control-plane node in "pause-598105" cluster
	I1025 10:25:58.876871  454182 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:25:58.879850  454182 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:25:58.882792  454182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:25:58.882876  454182 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:25:58.882886  454182 cache.go:58] Caching tarball of preloaded images
	I1025 10:25:58.882926  454182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:25:58.883027  454182 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:25:58.883043  454182 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:25:58.883235  454182 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/config.json ...
	I1025 10:25:58.902344  454182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:25:58.902370  454182 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:25:58.902391  454182 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:25:58.902416  454182 start.go:360] acquireMachinesLock for pause-598105: {Name:mk7275af11579743c9d1d77cd490c241a80c1ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:25:58.902471  454182 start.go:364] duration metric: took 37.572µs to acquireMachinesLock for "pause-598105"
	I1025 10:25:58.902496  454182 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:25:58.902502  454182 fix.go:54] fixHost starting: 
	I1025 10:25:58.902769  454182 cli_runner.go:164] Run: docker container inspect pause-598105 --format={{.State.Status}}
	I1025 10:25:58.936359  454182 fix.go:112] recreateIfNeeded on pause-598105: state=Running err=<nil>
	W1025 10:25:58.936388  454182 fix.go:138] unexpected machine state, will restart: <nil>
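
fixHost above keys off the container state reported by `docker container inspect`; since pause-598105 is Running, minikube reuses the machine but still logs the "will restart" warning. (The lines that follow, from PID 438892, belong to a second minikube process whose output is interleaved into this log.) A sketch of the same state probe run by hand, with the command taken from the Run line at 10:25:58.902769:

    # Expect "running" for a live pause-598105 machine container
    docker container inspect pause-598105 --format '{{.State.Status}}'
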
	I1025 10:25:55.873222  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:25:55.884427  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:25:55.884516  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:25:55.927204  438892 cri.go:89] found id: ""
	I1025 10:25:55.927247  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.927266  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:25:55.927274  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:25:55.927348  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:25:55.964946  438892 cri.go:89] found id: ""
	I1025 10:25:55.964972  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.964981  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:25:55.964987  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:25:55.965060  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:25:55.994528  438892 cri.go:89] found id: ""
	I1025 10:25:55.994556  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.994565  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:25:55.994572  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:25:55.994636  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:25:56.023599  438892 cri.go:89] found id: ""
	I1025 10:25:56.023623  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.023632  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:25:56.023639  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:25:56.023698  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:25:56.052674  438892 cri.go:89] found id: ""
	I1025 10:25:56.052703  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.052713  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:25:56.052720  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:25:56.052779  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:25:56.082456  438892 cri.go:89] found id: ""
	I1025 10:25:56.082483  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.082494  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:25:56.082501  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:25:56.082560  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:25:56.108696  438892 cri.go:89] found id: ""
	I1025 10:25:56.108722  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.108731  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:25:56.108737  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:25:56.108795  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:25:56.136778  438892 cri.go:89] found id: ""
	I1025 10:25:56.136858  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.136874  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:25:56.136884  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:25:56.136901  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:25:56.257022  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:25:56.257059  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:25:56.273028  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:25:56.273056  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:25:56.337738  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:25:56.337761  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:25:56.337784  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:25:56.374950  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:25:56.374984  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
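
The polling block above (PID 438892) runs one `crictl ps` per control-plane component, finds zero containers for every name, and falls back to collecting kubelet, dmesg, describe-nodes, and CRI-O logs. A sketch of the equivalent manual sweep inside the node, with the component names taken from the log lines:

    # Empty output for a name means that component's container does not exist
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done
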
	I1025 10:25:58.917033  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:25:58.930997  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:25:58.931071  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:25:58.977248  438892 cri.go:89] found id: ""
	I1025 10:25:58.977272  438892 logs.go:282] 0 containers: []
	W1025 10:25:58.977281  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:25:58.977288  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:25:58.977354  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:25:59.008408  438892 cri.go:89] found id: ""
	I1025 10:25:59.008429  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.008438  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:25:59.008445  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:25:59.008508  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:25:59.045488  438892 cri.go:89] found id: ""
	I1025 10:25:59.045510  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.045519  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:25:59.045525  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:25:59.045582  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:25:59.077820  438892 cri.go:89] found id: ""
	I1025 10:25:59.077843  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.077851  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:25:59.077857  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:25:59.077924  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:25:59.105802  438892 cri.go:89] found id: ""
	I1025 10:25:59.105825  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.105833  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:25:59.105839  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:25:59.105897  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:25:59.141324  438892 cri.go:89] found id: ""
	I1025 10:25:59.141415  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.141427  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:25:59.141435  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:25:59.141516  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:25:59.176002  438892 cri.go:89] found id: ""
	I1025 10:25:59.176025  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.176033  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:25:59.176039  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:25:59.176097  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:25:59.232504  438892 cri.go:89] found id: ""
	I1025 10:25:59.232525  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.232533  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:25:59.232542  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:25:59.232553  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:25:59.377757  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:25:59.377835  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:25:59.395665  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:25:59.395740  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:25:59.478555  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:25:59.478585  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:25:59.478597  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:25:59.517317  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:25:59.517353  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:25:58.940710  454182 out.go:252] * Updating the running docker "pause-598105" container ...
	I1025 10:25:58.940747  454182 machine.go:93] provisionDockerMachine start ...
	I1025 10:25:58.940836  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:58.959746  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:58.960077  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:58.960093  454182 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:25:59.123379  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-598105
	
	I1025 10:25:59.123415  454182 ubuntu.go:182] provisioning hostname "pause-598105"
	I1025 10:25:59.123480  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.147226  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.147533  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.147545  454182 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-598105 && echo "pause-598105" | sudo tee /etc/hostname
	I1025 10:25:59.325360  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-598105
	
	I1025 10:25:59.325437  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.347660  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.347968  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.347986  454182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-598105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-598105/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-598105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:25:59.512624  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:25:59.512647  454182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:25:59.512671  454182 ubuntu.go:190] setting up certificates
	I1025 10:25:59.512681  454182 provision.go:84] configureAuth start
	I1025 10:25:59.512748  454182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598105
	I1025 10:25:59.535547  454182 provision.go:143] copyHostCerts
	I1025 10:25:59.535610  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:25:59.535634  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:25:59.535715  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:25:59.535818  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:25:59.535830  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:25:59.535860  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:25:59.535924  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:25:59.535933  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:25:59.535958  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:25:59.536020  454182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.pause-598105 san=[127.0.0.1 192.168.85.2 localhost minikube pause-598105]
	I1025 10:25:59.667515  454182 provision.go:177] copyRemoteCerts
	I1025 10:25:59.667584  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:25:59.667633  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.689598  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:25:59.794972  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:25:59.812515  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:25:59.830616  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:25:59.849677  454182 provision.go:87] duration metric: took 336.972364ms to configureAuth
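
configureAuth above regenerates the server certificate with the SANs listed at 10:25:59.536020 and ships it to /etc/docker on the machine. A sketch for inspecting what was installed, with paths taken from the scp lines (run on the node):

    # Check the subject, validity window, and SANs of the provisioned cert
    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -dates
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
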
	I1025 10:25:59.849748  454182 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:25:59.849988  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:25:59.850110  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.867250  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.867551  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.867572  454182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:26:02.088582  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:02.099126  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:02.099217  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:02.129137  438892 cri.go:89] found id: ""
	I1025 10:26:02.129168  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.129177  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:02.129185  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:02.129244  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:02.158327  438892 cri.go:89] found id: ""
	I1025 10:26:02.158354  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.158362  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:02.158369  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:02.158429  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:02.183796  438892 cri.go:89] found id: ""
	I1025 10:26:02.183824  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.183833  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:02.183842  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:02.183920  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:02.214150  438892 cri.go:89] found id: ""
	I1025 10:26:02.214266  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.214280  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:02.214287  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:02.214353  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:02.246010  438892 cri.go:89] found id: ""
	I1025 10:26:02.246036  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.246044  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:02.246051  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:02.246114  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:02.272221  438892 cri.go:89] found id: ""
	I1025 10:26:02.272248  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.272258  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:02.272264  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:02.272326  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:02.297128  438892 cri.go:89] found id: ""
	I1025 10:26:02.297206  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.297229  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:02.297248  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:02.297319  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:02.320647  438892 cri.go:89] found id: ""
	I1025 10:26:02.320672  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.320691  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:02.320715  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:02.320735  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:02.436710  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:02.436795  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:02.453297  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:02.453332  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:02.526489  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:02.526564  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:02.526585  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:02.563093  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:02.563128  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:05.347103  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:26:05.347127  454182 machine.go:96] duration metric: took 6.406371294s to provisionDockerMachine
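
provisionDockerMachine completes above once the CRIO_MINIKUBE_OPTIONS drop-in written at 10:25:59 has been applied, CRI-O restarted, and the file's contents echoed back over SSH. A sketch for verifying the drop-in on the node, with the path taken from the tee command:

    # Confirm the option file landed and crio survived the restart
    cat /etc/sysconfig/crio.minikube
    systemctl is-active crio
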
	I1025 10:26:05.347137  454182 start.go:293] postStartSetup for "pause-598105" (driver="docker")
	I1025 10:26:05.347175  454182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:26:05.347238  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:26:05.347295  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.372342  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.481828  454182 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:26:05.485543  454182 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:26:05.485582  454182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:26:05.485595  454182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:26:05.485651  454182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:26:05.485728  454182 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:26:05.485847  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:26:05.497798  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:26:05.515671  454182 start.go:296] duration metric: took 168.518473ms for postStartSetup
	I1025 10:26:05.515758  454182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:26:05.515812  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.547320  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.660655  454182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:26:05.667633  454182 fix.go:56] duration metric: took 6.765125417s for fixHost
	I1025 10:26:05.667653  454182 start.go:83] releasing machines lock for "pause-598105", held for 6.765168789s
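
fixHost wraps up with two disk-usage probes over SSH before releasing the machines lock. A sketch of the same checks, copied from the Run lines above:

    df -h /var | awk 'NR==2{print $5}'    # percent of /var in use
    df -BG /var | awk 'NR==2{print $4}'   # gigabytes still available
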
	I1025 10:26:05.667719  454182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598105
	I1025 10:26:05.685302  454182 ssh_runner.go:195] Run: cat /version.json
	I1025 10:26:05.685347  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.685592  454182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:26:05.685640  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.715723  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.732411  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.827004  454182 ssh_runner.go:195] Run: systemctl --version
	I1025 10:26:05.919175  454182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:26:05.958355  454182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:26:05.962684  454182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:26:05.962755  454182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:26:05.971094  454182 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:26:05.971190  454182 start.go:495] detecting cgroup driver to use...
	I1025 10:26:05.971238  454182 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:26:05.971310  454182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:26:05.986424  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:26:05.999727  454182 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:26:05.999794  454182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:26:06.018201  454182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:26:06.032399  454182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:26:06.173711  454182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:26:06.311729  454182 docker.go:234] disabling docker service ...
	I1025 10:26:06.311860  454182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:26:06.328180  454182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:26:06.342228  454182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:26:06.477580  454182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:26:06.612403  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
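
Before handing the node's runtime over to CRI-O, minikube stops and disables the cri-docker and docker sockets, masks both services, and confirms docker is no longer active. A sketch for double-checking that end state on the node, with unit names taken from the log:

    systemctl is-enabled docker.socket docker.service   # expect disabled / masked
    systemctl is-active docker                          # expect inactive
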
	I1025 10:26:06.625857  454182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:26:06.640620  454182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:26:06.640688  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.649694  454182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:26:06.649768  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.659032  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.668431  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.678868  454182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:26:06.687432  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.696649  454182 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.704648  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.713417  454182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:26:06.720960  454182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:26:06.728225  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:06.857036  454182 ssh_runner.go:195] Run: sudo systemctl restart crio
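
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and restart. A sketch of the end state to expect, with the values taken from the individual sed commands:

    # Spot-check the rewritten drop-in after the restart
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0" inside default_sysctls
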
	I1025 10:26:07.013524  454182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:26:07.013646  454182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:26:07.017552  454182 start.go:563] Will wait 60s for crictl version
	I1025 10:26:07.017658  454182 ssh_runner.go:195] Run: which crictl
	I1025 10:26:07.021405  454182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:26:07.046498  454182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:26:07.046642  454182 ssh_runner.go:195] Run: crio --version
	I1025 10:26:07.080689  454182 ssh_runner.go:195] Run: crio --version
	I1025 10:26:07.112207  454182 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:26:07.115209  454182 cli_runner.go:164] Run: docker network inspect pause-598105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
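
The network inspect above renders a compact JSON summary of the pause-598105 Docker network so minikube can recover the subnet and gateway. A sketch of the same lookup with a simpler format string (network name from the log):

    # Print just the subnet and gateway of the profile's network
    docker network inspect pause-598105 \
        --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
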
	I1025 10:26:07.130417  454182 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:26:07.134290  454182 kubeadm.go:883] updating cluster {Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:26:07.134432  454182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:26:07.134506  454182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:26:07.172289  454182 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:26:07.172315  454182 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:26:07.172371  454182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:26:07.197361  454182 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:26:07.197388  454182 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:26:07.197396  454182 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:26:07.197506  454182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-598105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
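
The unit text above becomes a systemd drop-in; the 362-byte scp at 10:26:07.265427 below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for confirming what kubelet will actually be started with, using paths taken from the log:

    # Show the merged unit, including minikube's drop-in
    systemctl cat kubelet
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
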
	I1025 10:26:07.197588  454182 ssh_runner.go:195] Run: crio config
	I1025 10:26:07.249157  454182 cni.go:84] Creating CNI manager for ""
	I1025 10:26:07.249184  454182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:26:07.249203  454182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:26:07.249226  454182 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-598105 NodeName:pause-598105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:26:07.249357  454182 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-598105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
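	Each document in the generated kubeadm.yaml above has to parse cleanly before it is worth scp'ing to the node. A small sketch that round-trips the KubeletConfiguration document through sigs.k8s.io/yaml; the kubeletDoc struct is a hand-rolled subset for illustration, not the real k8s.io/kubelet config type:

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// kubeletDoc mirrors a few fields of the KubeletConfiguration document
// above, purely to demonstrate the parse; real code would decode into
// the upstream API type.
type kubeletDoc struct {
	APIVersion               string `json:"apiVersion"`
	Kind                     string `json:"kind"`
	CgroupDriver             string `json:"cgroupDriver"`
	ContainerRuntimeEndpoint string `json:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `json:"failSwapOn"`
	StaticPodPath            string `json:"staticPodPath"`
}

const doc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	var k kubeletDoc
	if err := yaml.Unmarshal([]byte(doc), &k); err != nil {
		panic(err) // a malformed document would surface here, not on the node
	}
	fmt.Printf("%s uses %s via %s\n", k.Kind, k.CgroupDriver, k.ContainerRuntimeEndpoint)
}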
	
	I1025 10:26:07.249435  454182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:26:07.257536  454182 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:26:07.257688  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:26:07.265427  454182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 10:26:07.278093  454182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:26:07.291715  454182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 10:26:07.304957  454182 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:26:07.308799  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:07.444572  454182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:26:07.458543  454182 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105 for IP: 192.168.85.2
	I1025 10:26:07.458616  454182 certs.go:195] generating shared ca certs ...
	I1025 10:26:07.458646  454182 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:07.458809  454182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:26:07.458894  454182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:26:07.458929  454182 certs.go:257] generating profile certs ...
	I1025 10:26:07.459048  454182 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key
	I1025 10:26:07.459243  454182 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.key.b50a4ea9
	I1025 10:26:07.459325  454182 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.key
	I1025 10:26:07.459462  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:26:07.459504  454182 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:26:07.459517  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:26:07.459540  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:26:07.459572  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:26:07.459596  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:26:07.459643  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:26:07.460256  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:26:07.478556  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:26:07.496895  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:26:07.515405  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:26:07.532635  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 10:26:07.549408  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:26:07.567234  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:26:07.584090  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:26:07.602678  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:26:07.638865  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:26:07.676812  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:26:07.699264  454182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:26:07.723755  454182 ssh_runner.go:195] Run: openssl version
	I1025 10:26:07.731569  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:26:07.744578  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.760783  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.760900  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.856417  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:26:07.879924  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:26:07.898982  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.908694  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.908760  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.978444  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:26:07.991711  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:26:08.005176  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.012137  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.012296  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.080141  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
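	The test -L / ln -fs sequences above maintain OpenSSL-style hash symlinks: each CA certificate gets a <subject-hash>.0 link in /etc/ssl/certs, which is how the system trust store looks certificates up. A sketch of the same dance from Go, shelling out to the same openssl invocation the log shows (paths taken from the log; error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA computes the certificate's OpenSSL subject hash and points
// /etc/ssl/certs/<hash>.0 at the PEM file, mirroring the ln -fs above.
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}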
	I1025 10:26:08.091343  454182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:26:08.098797  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:26:08.160585  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:26:08.209805  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:26:08.261473  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:26:08.343939  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:26:08.434887  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
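	The -checkend 86400 runs above ask openssl whether each control-plane certificate survives the next 24 hours. The equivalent test in Go's crypto/x509, as a sketch (the path and 24-hour window are taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, i.e. what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}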
	I1025 10:26:08.506599  454182 kubeadm.go:400] StartCluster: {Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:26:08.506733  454182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:26:08.506797  454182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:26:08.552725  454182 cri.go:89] found id: "e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b"
	I1025 10:26:08.552748  454182 cri.go:89] found id: "54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd"
	I1025 10:26:08.552755  454182 cri.go:89] found id: "7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f"
	I1025 10:26:08.552759  454182 cri.go:89] found id: "bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2"
	I1025 10:26:08.552762  454182 cri.go:89] found id: "12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2"
	I1025 10:26:08.552766  454182 cri.go:89] found id: "140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71"
	I1025 10:26:08.552770  454182 cri.go:89] found id: "567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d"
	I1025 10:26:08.552773  454182 cri.go:89] found id: "c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2"
	I1025 10:26:08.552776  454182 cri.go:89] found id: "a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	I1025 10:26:08.552783  454182 cri.go:89] found id: "f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb"
	I1025 10:26:08.552787  454182 cri.go:89] found id: "7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	I1025 10:26:08.552790  454182 cri.go:89] found id: "c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b"
	I1025 10:26:08.552793  454182 cri.go:89] found id: "f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a"
	I1025 10:26:08.552796  454182 cri.go:89] found id: "40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4"
	I1025 10:26:08.552800  454182 cri.go:89] found id: ""
	I1025 10:26:08.552850  454182 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:26:08.575120  454182 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:26:08.575271  454182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:26:08.636238  454182 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:26:08.636259  454182 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:26:08.636313  454182 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:26:08.674174  454182 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:26:08.674789  454182 kubeconfig.go:125] found "pause-598105" server: "https://192.168.85.2:8443"
	I1025 10:26:08.675628  454182 kapi.go:59] client config for pause-598105: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key", CAFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:26:08.676465  454182 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:26:08.676488  454182 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:26:08.676495  454182 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:26:08.676501  454182 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:26:08.676506  454182 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:26:08.676910  454182 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:26:08.719697  454182 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:26:08.719731  454182 kubeadm.go:601] duration metric: took 83.465114ms to restartPrimaryControlPlane
	I1025 10:26:08.719740  454182 kubeadm.go:402] duration metric: took 213.152022ms to StartCluster
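	The `diff -u kubeadm.yaml kubeadm.yaml.new` step above is the reconfiguration gate: an empty diff (exit 0) means the running control plane already matches the desired config, so kubeadm is skipped entirely. A byte-level sketch of the same decision (diff additionally shows where the files differ; this is an illustration, not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfigure reports whether the desired config differs from the
// one the cluster was started with; identical bytes mean no kubeadm run.
func needsReconfigure(current, desired string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, nil // no current config on disk: reconfigure
	}
	b, err := os.ReadFile(desired)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfigure:", changed, err)
}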
	I1025 10:26:08.719756  454182 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:08.719820  454182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:26:08.720691  454182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:08.720913  454182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:26:08.721205  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:26:08.721252  454182 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:26:08.724449  454182 out.go:179] * Verifying Kubernetes components...
	I1025 10:26:08.724540  454182 out.go:179] * Enabled addons: 
	I1025 10:26:05.094924  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:05.105739  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:05.105810  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:05.132770  438892 cri.go:89] found id: ""
	I1025 10:26:05.132793  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.132801  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:05.132809  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:05.132872  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:05.189682  438892 cri.go:89] found id: ""
	I1025 10:26:05.189704  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.189715  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:05.189722  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:05.189780  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:05.234293  438892 cri.go:89] found id: ""
	I1025 10:26:05.234316  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.234324  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:05.234331  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:05.234387  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:05.273455  438892 cri.go:89] found id: ""
	I1025 10:26:05.273477  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.273486  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:05.273493  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:05.273550  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:05.309427  438892 cri.go:89] found id: ""
	I1025 10:26:05.309451  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.309464  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:05.309471  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:05.309535  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:05.344324  438892 cri.go:89] found id: ""
	I1025 10:26:05.344355  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.344365  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:05.344372  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:05.344432  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:05.395716  438892 cri.go:89] found id: ""
	I1025 10:26:05.395740  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.395749  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:05.395757  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:05.395813  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:05.430107  438892 cri.go:89] found id: ""
	I1025 10:26:05.430129  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.430137  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:05.430146  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:05.430157  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:05.565974  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:05.566012  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:05.585008  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:05.585040  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:05.666792  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:05.666824  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:05.666836  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:05.713065  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:05.713102  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
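	The container-status command above is a deliberate fallback chain: the backticked `which crictl || echo crictl` keeps the command well-formed even when crictl is missing from PATH, and the outer || falls back to `docker ps -a` if the crictl invocation fails. The same ordering as a Go sketch:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl (the CRI client) and only falls back to
// docker when crictl is absent or exits non-zero, mirroring the shell
// one-liner in the log.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(string(out))
}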
	I1025 10:26:08.269252  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:08.287057  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:08.287129  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:08.341333  438892 cri.go:89] found id: ""
	I1025 10:26:08.341360  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.341369  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:08.341376  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:08.341432  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:08.380433  438892 cri.go:89] found id: ""
	I1025 10:26:08.380455  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.380463  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:08.380470  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:08.380531  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:08.423944  438892 cri.go:89] found id: ""
	I1025 10:26:08.423972  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.423981  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:08.423987  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:08.424050  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:08.470515  438892 cri.go:89] found id: ""
	I1025 10:26:08.470543  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.470551  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:08.470558  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:08.470620  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:08.521211  438892 cri.go:89] found id: ""
	I1025 10:26:08.521240  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.521251  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:08.521259  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:08.521319  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:08.565492  438892 cri.go:89] found id: ""
	I1025 10:26:08.565514  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.565524  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:08.565531  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:08.565587  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:08.604673  438892 cri.go:89] found id: ""
	I1025 10:26:08.604696  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.604705  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:08.604712  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:08.604767  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:08.672466  438892 cri.go:89] found id: ""
	I1025 10:26:08.672488  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.672502  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:08.672513  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:08.672526  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:08.705144  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:08.705173  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:08.836525  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:08.836543  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:08.836557  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:08.898412  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:08.898492  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:08.953850  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:08.953879  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:08.727467  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:08.727635  454182 addons.go:514] duration metric: took 6.376254ms for enable addons: enabled=[]
	I1025 10:26:09.107711  454182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:26:09.151766  454182 node_ready.go:35] waiting up to 6m0s for node "pause-598105" to be "Ready" ...
	I1025 10:26:12.762851  454182 node_ready.go:49] node "pause-598105" is "Ready"
	I1025 10:26:12.762883  454182 node_ready.go:38] duration metric: took 3.611071663s for node "pause-598105" to be "Ready" ...
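	The node_ready wait above polls the API until the node's Ready condition turns True. A client-go sketch of the same wait; the poll interval and kubeconfig resolution are illustrative choices, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports a Ready=True
// condition or the timeout elapses.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitNodeReady(cs, "pause-598105", 6*time.Minute))
}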
	I1025 10:26:12.762898  454182 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:26:12.762956  454182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:12.782805  454182 api_server.go:72] duration metric: took 4.061856579s to wait for apiserver process to appear ...
	I1025 10:26:12.782828  454182 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:26:12.782847  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:12.829275  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:26:12.829305  454182 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
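	The 500s above are normal while the apiserver warms up: the verbose healthz body enumerates post-start hooks, [+] for passing and [-] for still-failing, and the loop retries until no [-] lines remain and the endpoint returns 200. A sketch of probing and splitting that body; InsecureSkipVerify stands in for loading minikube's CA and is only acceptable in a test sketch:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// checkHealthz hits the apiserver's /healthz and collects the hooks that
// are still reported as failing ([-] lines in the verbose body).
func checkHealthz(base string) (failed []string, err error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz?verbose")
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	for _, line := range strings.Split(string(body), "\n") {
		if strings.HasPrefix(line, "[-]") {
			failed = append(failed, line)
		}
	}
	if resp.StatusCode != http.StatusOK {
		err = fmt.Errorf("healthz returned %d", resp.StatusCode)
	}
	return failed, err
}

func main() {
	failed, err := checkHealthz("https://192.168.85.2:8443")
	fmt.Println(err, failed)
}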
	I1025 10:26:13.283954  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:13.292169  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:26:13.292198  454182 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:26:11.650889  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:11.664251  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:11.664324  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:11.705607  438892 cri.go:89] found id: ""
	I1025 10:26:11.705646  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.705656  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:11.705663  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:11.705726  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:11.765578  438892 cri.go:89] found id: ""
	I1025 10:26:11.765606  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.765616  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:11.765622  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:11.765714  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:11.813395  438892 cri.go:89] found id: ""
	I1025 10:26:11.813425  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.813434  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:11.813441  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:11.813500  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:11.862668  438892 cri.go:89] found id: ""
	I1025 10:26:11.862696  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.862706  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:11.862712  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:11.862772  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:11.908455  438892 cri.go:89] found id: ""
	I1025 10:26:11.908491  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.908501  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:11.908508  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:11.908578  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:11.973748  438892 cri.go:89] found id: ""
	I1025 10:26:11.973776  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.973793  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:11.973800  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:11.973868  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:12.030510  438892 cri.go:89] found id: ""
	I1025 10:26:12.030548  438892 logs.go:282] 0 containers: []
	W1025 10:26:12.030557  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:12.030565  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:12.030636  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:12.070253  438892 cri.go:89] found id: ""
	I1025 10:26:12.070282  438892 logs.go:282] 0 containers: []
	W1025 10:26:12.070300  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:12.070309  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:12.070321  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:12.208728  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:12.208766  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:12.225009  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:12.225041  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:12.332313  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:12.332347  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:12.332360  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:12.370675  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:12.370716  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:13.783005  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:13.791319  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:26:13.792479  454182 api_server.go:141] control plane version: v1.34.1
	I1025 10:26:13.792505  454182 api_server.go:131] duration metric: took 1.009669596s to wait for apiserver health ...
	I1025 10:26:13.792516  454182 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:26:13.795473  454182 system_pods.go:59] 7 kube-system pods found
	I1025 10:26:13.795515  454182 system_pods.go:61] "coredns-66bc5c9577-mwxxc" [5aef0e38-29c2-4dbc-b75d-96b9454113b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:26:13.795551  454182 system_pods.go:61] "etcd-pause-598105" [5dab1e0a-ec54-4063-807f-d9eb06f2d9b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:26:13.795566  454182 system_pods.go:61] "kindnet-x2zhm" [183c57f8-d19b-4e10-b018-d0518418dc4e] Running
	I1025 10:26:13.795575  454182 system_pods.go:61] "kube-apiserver-pause-598105" [3c26c8e4-11aa-4c5a-8b0e-7fdbd046f314] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:26:13.795588  454182 system_pods.go:61] "kube-controller-manager-pause-598105" [ca780a17-ece9-4aed-aae6-59a73940fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:26:13.795594  454182 system_pods.go:61] "kube-proxy-gg7cn" [8afd0eec-98cc-4d94-ac83-e6734161aea0] Running
	I1025 10:26:13.795607  454182 system_pods.go:61] "kube-scheduler-pause-598105" [be0d2fb9-d287-48c8-8d9d-9e9f20a30f13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:26:13.795637  454182 system_pods.go:74] duration metric: took 3.113345ms to wait for pod list to return data ...
	I1025 10:26:13.795662  454182 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:26:13.798527  454182 default_sa.go:45] found service account: "default"
	I1025 10:26:13.798593  454182 default_sa.go:55] duration metric: took 2.922778ms for default service account to be created ...
	I1025 10:26:13.798609  454182 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:26:13.801439  454182 system_pods.go:86] 7 kube-system pods found
	I1025 10:26:13.801475  454182 system_pods.go:89] "coredns-66bc5c9577-mwxxc" [5aef0e38-29c2-4dbc-b75d-96b9454113b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:26:13.801485  454182 system_pods.go:89] "etcd-pause-598105" [5dab1e0a-ec54-4063-807f-d9eb06f2d9b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:26:13.801490  454182 system_pods.go:89] "kindnet-x2zhm" [183c57f8-d19b-4e10-b018-d0518418dc4e] Running
	I1025 10:26:13.801497  454182 system_pods.go:89] "kube-apiserver-pause-598105" [3c26c8e4-11aa-4c5a-8b0e-7fdbd046f314] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:26:13.801504  454182 system_pods.go:89] "kube-controller-manager-pause-598105" [ca780a17-ece9-4aed-aae6-59a73940fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:26:13.801513  454182 system_pods.go:89] "kube-proxy-gg7cn" [8afd0eec-98cc-4d94-ac83-e6734161aea0] Running
	I1025 10:26:13.801520  454182 system_pods.go:89] "kube-scheduler-pause-598105" [be0d2fb9-d287-48c8-8d9d-9e9f20a30f13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:26:13.801540  454182 system_pods.go:126] duration metric: took 2.924731ms to wait for k8s-apps to be running ...
	I1025 10:26:13.801548  454182 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:26:13.801603  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:26:13.814689  454182 system_svc.go:56] duration metric: took 13.131721ms WaitForService to wait for kubelet
	I1025 10:26:13.814720  454182 kubeadm.go:586] duration metric: took 5.093774904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:26:13.814739  454182 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:26:13.817763  454182 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:26:13.817799  454182 node_conditions.go:123] node cpu capacity is 2
	I1025 10:26:13.817812  454182 node_conditions.go:105] duration metric: took 3.06774ms to run NodePressure ...
	I1025 10:26:13.817825  454182 start.go:241] waiting for startup goroutines ...
	I1025 10:26:13.817833  454182 start.go:246] waiting for cluster config update ...
	I1025 10:26:13.817842  454182 start.go:255] writing updated cluster config ...
	I1025 10:26:13.818198  454182 ssh_runner.go:195] Run: rm -f paused
	I1025 10:26:13.821830  454182 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:26:13.822507  454182 kapi.go:59] client config for pause-598105: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key", CAFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:26:13.826310  454182 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mwxxc" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:26:15.832723  454182 pod_ready.go:104] pod "coredns-66bc5c9577-mwxxc" is not "Ready", error: <nil>
	W1025 10:26:18.340789  454182 pod_ready.go:104] pod "coredns-66bc5c9577-mwxxc" is not "Ready", error: <nil>
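	The pod_ready wait above is stricter than phase==Running: a pod only counts once its Ready condition is True, which is why the Running-but-ContainersNotReady coredns pod keeps the loop waiting. A client-go sketch of that check; the pod name is taken from the log and the kubeconfig resolution is an illustrative choice:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the same
// criterion the pod_ready loop applies.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(podReady(cs, "kube-system", "coredns-66bc5c9577-mwxxc"))
}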
	I1025 10:26:14.939620  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:14.949980  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:14.950069  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:14.980983  438892 cri.go:89] found id: ""
	I1025 10:26:14.981019  438892 logs.go:282] 0 containers: []
	W1025 10:26:14.981028  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:14.981035  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:14.981096  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:15.010768  438892 cri.go:89] found id: ""
	I1025 10:26:15.011228  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.011282  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:15.011311  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:15.011450  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:15.055501  438892 cri.go:89] found id: ""
	I1025 10:26:15.055589  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.055615  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:15.055645  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:15.055741  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:15.088984  438892 cri.go:89] found id: ""
	I1025 10:26:15.089013  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.089024  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:15.089031  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:15.089093  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:15.117598  438892 cri.go:89] found id: ""
	I1025 10:26:15.117621  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.117629  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:15.117636  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:15.117700  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:15.146587  438892 cri.go:89] found id: ""
	I1025 10:26:15.146617  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.146626  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:15.146634  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:15.146696  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:15.173772  438892 cri.go:89] found id: ""
	I1025 10:26:15.173841  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.173864  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:15.173887  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:15.173976  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:15.200614  438892 cri.go:89] found id: ""
	I1025 10:26:15.200640  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.200660  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:15.200670  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:15.200682  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:15.321323  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:15.321357  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:15.340245  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:15.340276  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:15.420172  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:15.420265  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:15.420287  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:15.459593  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:15.459627  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:17.996375  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:18.008899  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:18.008990  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:18.040562  438892 cri.go:89] found id: ""
	I1025 10:26:18.040595  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.040605  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:18.040612  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:18.040676  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:18.067943  438892 cri.go:89] found id: ""
	I1025 10:26:18.067970  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.067981  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:18.067988  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:18.068046  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:18.094973  438892 cri.go:89] found id: ""
	I1025 10:26:18.094998  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.095007  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:18.095014  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:18.095069  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:18.121106  438892 cri.go:89] found id: ""
	I1025 10:26:18.121132  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.121141  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:18.121148  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:18.121211  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:18.147539  438892 cri.go:89] found id: ""
	I1025 10:26:18.147564  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.147572  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:18.147579  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:18.147635  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:18.174017  438892 cri.go:89] found id: ""
	I1025 10:26:18.174038  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.174047  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:18.174054  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:18.174115  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:18.200055  438892 cri.go:89] found id: ""
	I1025 10:26:18.200082  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.200099  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:18.200106  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:18.200189  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:18.229040  438892 cri.go:89] found id: ""
	I1025 10:26:18.229065  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.229075  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:18.229084  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:18.229112  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:18.353467  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:18.353509  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:18.372404  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:18.372435  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:18.441975  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:18.441997  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:18.442011  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:18.479846  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:18.479881  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:19.838624  454182 pod_ready.go:94] pod "coredns-66bc5c9577-mwxxc" is "Ready"
	I1025 10:26:19.838709  454182 pod_ready.go:86] duration metric: took 6.012373986s for pod "coredns-66bc5c9577-mwxxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.842098  454182 pod_ready.go:83] waiting for pod "etcd-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.852961  454182 pod_ready.go:94] pod "etcd-pause-598105" is "Ready"
	I1025 10:26:19.852990  454182 pod_ready.go:86] duration metric: took 10.854195ms for pod "etcd-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.942740  454182 pod_ready.go:83] waiting for pod "kube-apiserver-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:26:21.947633  454182 pod_ready.go:104] pod "kube-apiserver-pause-598105" is not "Ready", error: <nil>
	I1025 10:26:21.011635  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:21.022296  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:21.022364  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:21.048627  438892 cri.go:89] found id: ""
	I1025 10:26:21.048650  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.048659  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:21.048666  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:21.048727  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:21.079027  438892 cri.go:89] found id: ""
	I1025 10:26:21.079053  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.079061  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:21.079068  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:21.079126  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:21.104193  438892 cri.go:89] found id: ""
	I1025 10:26:21.104217  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.104225  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:21.104232  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:21.104295  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:21.130491  438892 cri.go:89] found id: ""
	I1025 10:26:21.130514  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.130522  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:21.130529  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:21.130588  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:21.161651  438892 cri.go:89] found id: ""
	I1025 10:26:21.161674  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.161681  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:21.161687  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:21.161744  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:21.190105  438892 cri.go:89] found id: ""
	I1025 10:26:21.190127  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.190136  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:21.190142  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:21.190203  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:21.216258  438892 cri.go:89] found id: ""
	I1025 10:26:21.216288  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.216297  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:21.216304  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:21.216361  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:21.246540  438892 cri.go:89] found id: ""
	I1025 10:26:21.246564  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.246573  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:21.246582  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:21.246594  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:21.278793  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:21.278822  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:21.400271  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:21.400313  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:21.417089  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:21.417117  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:21.496759  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:21.496781  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:21.496796  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:24.038887  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:24.049530  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:24.049598  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:24.076037  438892 cri.go:89] found id: ""
	I1025 10:26:24.076062  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.076072  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:24.076079  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:24.076142  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:24.102394  438892 cri.go:89] found id: ""
	I1025 10:26:24.102420  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.102437  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:24.102460  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:24.102555  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:24.128449  438892 cri.go:89] found id: ""
	I1025 10:26:24.128473  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.128481  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:24.128494  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:24.128575  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:24.158189  438892 cri.go:89] found id: ""
	I1025 10:26:24.158216  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.158237  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:24.158244  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:24.158340  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:24.184119  438892 cri.go:89] found id: ""
	I1025 10:26:24.184142  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.184157  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:24.184164  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:24.184225  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:24.209733  438892 cri.go:89] found id: ""
	I1025 10:26:24.209808  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.209840  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:24.209861  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:24.209949  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:24.235401  438892 cri.go:89] found id: ""
	I1025 10:26:24.235482  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.235514  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:24.235535  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:24.235625  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:24.261927  438892 cri.go:89] found id: ""
	I1025 10:26:24.262005  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.262027  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:24.262052  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:24.262088  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:24.380449  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:24.380484  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:24.397434  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:24.397462  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:24.475555  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:24.475630  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:24.475652  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:24.516468  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:24.516509  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
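Editor's note: the four near-identical polling cycles above (process 438892) are minikube's diagnostics fallback while the apiserver is unreachable; each pass lists CRI containers per component, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status output. As a minimal sketch (not minikube's actual ssh_runner helper), the final container-status step comes down to the logged shell fallback, wrapped here in Go:

```go
// Minimal sketch, assuming local execution rather than minikube's SSH runner:
// list containers via crictl if available, else fall back to docker, exactly
// as the logged command does.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	// Mirrors the logged command verbatim. If crictl is missing, the backtick
	// substitution leaves a literal "crictl" that fails, triggering the
	// `|| sudo docker ps -a` fallback.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}
```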
	W1025 10:26:23.948161  454182 pod_ready.go:104] pod "kube-apiserver-pause-598105" is not "Ready", error: <nil>
	I1025 10:26:25.950663  454182 pod_ready.go:94] pod "kube-apiserver-pause-598105" is "Ready"
	I1025 10:26:25.950693  454182 pod_ready.go:86] duration metric: took 6.007915697s for pod "kube-apiserver-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.955830  454182 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.961412  454182 pod_ready.go:94] pod "kube-controller-manager-pause-598105" is "Ready"
	I1025 10:26:25.961436  454182 pod_ready.go:86] duration metric: took 5.577549ms for pod "kube-controller-manager-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.963793  454182 pod_ready.go:83] waiting for pod "kube-proxy-gg7cn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.967983  454182 pod_ready.go:94] pod "kube-proxy-gg7cn" is "Ready"
	I1025 10:26:25.968009  454182 pod_ready.go:86] duration metric: took 4.190692ms for pod "kube-proxy-gg7cn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.970126  454182 pod_ready.go:83] waiting for pod "kube-scheduler-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:26.147596  454182 pod_ready.go:94] pod "kube-scheduler-pause-598105" is "Ready"
	I1025 10:26:26.147624  454182 pod_ready.go:86] duration metric: took 177.473464ms for pod "kube-scheduler-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:26.147638  454182 pod_ready.go:40] duration metric: took 12.325735201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:26:26.198871  454182 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:26:26.201904  454182 out.go:179] * Done! kubectl is now configured to use "pause-598105" cluster and "default" namespace by default
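Editor's note: the pod_ready.go lines interleaved above (process 454182) treat a pod as satisfied once it reports the Ready condition or no longer exists, retrying otherwise. A minimal client-go sketch of that "Ready or be gone" poll follows; waitPodReadyOrGone is a hypothetical helper name, not minikube's implementation.

```go
// Sketch of a "Ready or gone" wait loop using client-go. Assumptions: a
// 2-second retry interval and caller-supplied context; minikube's real
// pod_ready.go differs in structure and logging.
package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodReadyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			return nil // pod is gone, which also satisfies the wait
		case err == nil:
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod is "Ready"
				}
			}
		}
		// Not Ready yet (or a transient error, logged as `error: <nil>` above
		// when the pod simply isn't Ready): retry after a short delay.
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}
```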
	
	
	==> CRI-O <==
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.823569575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.829117454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.840388196Z" level=info msg="Started container" PID=2285 containerID=bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2 description=kube-system/etcd-pause-598105/etcd id=83b4b790-8a7e-44a2-99a6-558b44bd4b78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25c98b76265a5863ddf43b3f25089c5d8745e2fa7946a963f98679031b53c44c
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.863723461Z" level=info msg="Started container" PID=2269 containerID=12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2 description=kube-system/kube-apiserver-pause-598105/kube-apiserver id=a8fb0d59-360a-418e-9bd8-3990407c34a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72a7523c9bda4427ce16fe00a5961e29a42f9a4660fe7d97cc661657d937ee31
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.878066226Z" level=info msg="Created container 7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f: kube-system/kube-scheduler-pause-598105/kube-scheduler" id=ff264a87-8ffa-44dc-a2bb-51574fc61cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.888852843Z" level=info msg="Created container 54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd: kube-system/kube-controller-manager-pause-598105/kube-controller-manager" id=cf6e892d-f6ed-4b4f-b127-6bbef87fcf7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.889428555Z" level=info msg="Starting container: 54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd" id=e622557d-47c5-41b5-bbb1-d8a2bd5df6bd name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.890219679Z" level=info msg="Starting container: 7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f" id=de2ff924-63d5-4c19-b731-29351505fffc name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.891349671Z" level=info msg="Started container" PID=2294 containerID=54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd description=kube-system/kube-controller-manager-pause-598105/kube-controller-manager id=e622557d-47c5-41b5-bbb1-d8a2bd5df6bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ff2fd1e0fa6ec15391afa3f4a51fd19a0423c201fa7dd08396eac47e88b2576
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.900270786Z" level=info msg="Started container" PID=2290 containerID=7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f description=kube-system/kube-scheduler-pause-598105/kube-scheduler id=de2ff924-63d5-4c19-b731-29351505fffc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a3b9c4ee6539819850a1d206dc5af78096846532ff976e21d6c8d4de6c99ce5
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.286183953Z" level=info msg="Created container e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b: kube-system/kube-proxy-gg7cn/kube-proxy" id=d2f9d6c3-ab10-4576-a29e-6c5c32b6f5db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.288612242Z" level=info msg="Starting container: e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b" id=e7c62927-ed2f-480d-90e6-4d35bd85561a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.291621339Z" level=info msg="Started container" PID=2321 containerID=e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b description=kube-system/kube-proxy-gg7cn/kube-proxy id=e7c62927-ed2f-480d-90e6-4d35bd85561a name=/runtime.v1.RuntimeService/StartContainer sandboxID=76624a180807e360eb6b737e67d1eaf7fe8a9f01233f7c7b4fd2e2247fa3662c
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.006672813Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.013927306Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.014166768Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.014311516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022203671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022602404Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022779473Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.028327105Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.028563818Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.02869968Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.033655576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.033881974Z" level=info msg="Updated default CNI network name to kindnet"
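Editor's note: the CNI monitoring events above trace kindnet writing its conflist to a .temp file and renaming it into place, with CRI-O re-reading /etc/cni/net.d on each CREATE/WRITE/RENAME. An illustrative fsnotify sketch of such a directory watcher follows; this is a simplified stand-in, and CRI-O's real watcher does considerably more.

```go
// Illustrative directory watcher producing CREATE/WRITE/RENAME events like
// those logged above, using github.com/fsnotify/fsnotify. Assumption: a
// simple log-only handler in place of CRI-O's CNI config reload logic.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Watch the CNI config directory seen in the CRI-O log.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-watcher.Events:
			// A real runtime would re-scan the directory and re-rank CNI
			// networks here; we only log the event.
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 {
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			}
		case err := <-watcher.Errors:
			log.Println("watch error:", err)
		}
	}
}
```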
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e19bbf1f4eed5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   76624a180807e       kube-proxy-gg7cn                       kube-system
	54304becc9c9b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   9ff2fd1e0fa6e       kube-controller-manager-pause-598105   kube-system
	7a7371c136a2c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   7a3b9c4ee6539       kube-scheduler-pause-598105            kube-system
	bf9ced8f087cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   25c98b76265a5       etcd-pause-598105                      kube-system
	12e15ac9eaae3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   72a7523c9bda4       kube-apiserver-pause-598105            kube-system
	140627d5948e8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   078c674624b0b       coredns-66bc5c9577-mwxxc               kube-system
	567f9d3b15fc7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   22 seconds ago       Running             kindnet-cni               1                   ebdae3f24c8c2       kindnet-x2zhm                          kube-system
	c76e4fa5a72c2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   078c674624b0b       coredns-66bc5c9577-mwxxc               kube-system
	a687e0761f69e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   76624a180807e       kube-proxy-gg7cn                       kube-system
	f8ac7113586ad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ebdae3f24c8c2       kindnet-x2zhm                          kube-system
	7cfc933f8e740       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7a3b9c4ee6539       kube-scheduler-pause-598105            kube-system
	c030a64bfba81       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   72a7523c9bda4       kube-apiserver-pause-598105            kube-system
	f988c3e30953f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   9ff2fd1e0fa6e       kube-controller-manager-pause-598105   kube-system
	40f252c2f9351       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   25c98b76265a5       etcd-pause-598105                      kube-system
	
	
	==> coredns [140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60554 - 27643 "HINFO IN 2208468797763737038.1216825629007552775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03390533s
	
	
	==> coredns [c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44143 - 1580 "HINFO IN 5747158795120622237.4124721675757847013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003715379s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-598105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-598105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=pause-598105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_25_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-598105
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:26:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-598105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                01970c02-0799-474c-af8a-64373f40a4f6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mwxxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     75s
	  kube-system                 etcd-pause-598105                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         82s
	  kube-system                 kindnet-x2zhm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-598105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-598105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-gg7cn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-598105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 74s   kube-proxy       
	  Normal   Starting                 16s   kube-proxy       
	  Normal   Starting                 81s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s   kubelet          Node pause-598105 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s   kubelet          Node pause-598105 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s   kubelet          Node pause-598105 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s   node-controller  Node pause-598105 event: Registered Node pause-598105 in Controller
	  Normal   NodeReady                34s   kubelet          Node pause-598105 status is now: NodeReady
	  Normal   RegisteredNode           13s   node-controller  Node pause-598105 event: Registered Node pause-598105 in Controller
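Editor's note: a quick arithmetic check of the Allocated resources table above: the per-pod CPU requests (100m + 100m + 100m + 250m + 200m + 0 + 100m) sum to 850m against the node's 2 CPUs (2000m), i.e. 42.5%, displayed by kubectl as 42%. A tiny Go verification, with the request values copied from the pod table:

```go
// Verifies the 850m / 42% CPU figure from the describe-nodes output above.
package main

import "fmt"

func main() {
	// millicore requests from the Non-terminated Pods table, top to bottom
	requestsMilli := []int{100, 100, 100, 250, 200, 0, 100}
	total := 0
	for _, r := range requestsMilli {
		total += r
	}
	// Prints: 850m of 2000m = 42.5% (kubectl displays this as 42%)
	fmt.Printf("%dm of 2000m = %.1f%%\n", total, float64(total)/2000*100)
}
```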
	
	
	==> dmesg <==
	[Oct25 10:00] overlayfs: idmapped layers are currently not supported
	[Oct25 10:01] overlayfs: idmapped layers are currently not supported
	[Oct25 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.771525] overlayfs: idmapped layers are currently not supported
	[ +47.892456] overlayfs: idmapped layers are currently not supported
	[Oct25 10:03] overlayfs: idmapped layers are currently not supported
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4] <==
	{"level":"warn","ts":"2025-10-25T10:25:04.483031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.540082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.580262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.651488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.761969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.769509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.880467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:26:00.135303Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:26:00.135365Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-598105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-25T10:26:00.135495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:26:00.476820Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:26:00.478370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478443Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478535Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478560Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T10:26:00.478565Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-10-25T10:26:00.478569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478546Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:26:00.478589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:26:00.478626Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-25T10:26:00.478640Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T10:26:00.482109Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-25T10:26:00.482202Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:26:00.482244Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:26:00.482257Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-598105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2] <==
	{"level":"warn","ts":"2025-10-25T10:26:11.168594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.187353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.212245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.228045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.242776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.265924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.279051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.303008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.313752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.339659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.352489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.373802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.384121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.401621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.426220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.438120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.484979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.496356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.510638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.526231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.549819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.574112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.593898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.615209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.678346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:26:29 up  2:08,  0 user,  load average: 1.95, 2.90, 2.60
	Linux pause-598105 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d] <==
	I1025 10:26:07.794458       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:26:07.797912       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:26:07.798062       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:26:07.798075       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:26:07.798086       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:26:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:26:08.013591       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:26:08.014217       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:26:08.014268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:26:08.023122       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:26:12.923997       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:26:12.924080       1 metrics.go:72] Registering metrics
	I1025 10:26:12.924219       1 controller.go:711] "Syncing nftables rules"
	I1025 10:26:18.001983       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:26:18.002163       1 main.go:301] handling current node
	I1025 10:26:28.003254       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:26:28.003343       1 main.go:301] handling current node
	
	
	==> kindnet [f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb] <==
	I1025 10:25:15.080369       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:25:15.080623       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:25:15.080755       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:25:15.080776       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:25:15.080790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:25:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:25:15.282830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:25:15.283020       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:25:15.283042       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:25:15.375399       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:25:45.283761       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:25:45.376809       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:25:45.376821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:25:45.376912       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:25:46.983553       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:25:46.983710       1 metrics.go:72] Registering metrics
	I1025 10:25:46.983854       1 controller.go:711] "Syncing nftables rules"
	I1025 10:25:55.288553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:25:55.288613       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2] <==
	I1025 10:26:12.801575       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:26:12.801963       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:26:12.802016       1 policy_source.go:240] refreshing policies
	I1025 10:26:12.802149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:26:12.807562       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:26:12.817237       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:26:12.842935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:26:12.817747       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:26:12.824696       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:26:12.824745       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:26:12.847296       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:26:12.837048       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:26:12.847516       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:26:12.847524       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:26:12.847536       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:26:12.846114       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:26:12.881495       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:26:12.881616       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:26:12.881659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:26:13.510220       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:26:14.736794       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:26:16.137252       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:26:16.338142       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:26:16.388090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:26:16.537742       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b] <==
	W1025 10:26:00.191306       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191399       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191497       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191591       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191680       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191773       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191886       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191983       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.192084       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210699       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210790       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210846       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210896       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210949       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211007       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211065       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211123       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211264       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211340       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.215700       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.215893       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216008       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216258       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216411       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.218198       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd] <==
	I1025 10:26:16.132028       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:26:16.132062       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:26:16.132112       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:26:16.139378       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:26:16.141575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:26:16.144880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:26:16.149987       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:26:16.152256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:26:16.155067       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:26:16.155072       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:26:16.157289       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:26:16.159541       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:26:16.163806       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:26:16.163891       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:26:16.163907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:26:16.167939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:26:16.172304       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:26:16.175622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:26:16.180065       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:26:16.180175       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:26:16.180261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:26:16.180346       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-598105"
	I1025 10:26:16.180393       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:26:16.180550       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:26:16.180855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a] <==
	I1025 10:25:12.722486       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:25:12.722541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:25:12.722571       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:25:12.722599       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:25:12.723268       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:25:12.726611       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:25:12.734969       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:25:12.740455       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:25:12.740713       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-598105" podCIDRs=["10.244.0.0/24"]
	I1025 10:25:12.746371       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:25:12.759703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:25:12.760752       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:25:12.760781       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:25:12.760806       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:25:12.760868       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:25:12.761456       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:25:12.763191       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:25:12.764264       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:25:12.765514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:25:12.765564       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:25:12.766841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:25:12.766856       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:25:12.766863       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:25:12.772235       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:25:57.718506       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b] <==
	I1025 10:25:15.577350       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:25:15.657043       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:25:15.757252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:25:15.757373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:25:15.757535       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:25:15.776880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:25:15.776935       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:25:15.780922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:25:15.781262       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:25:15.781342       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:25:15.784595       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:25:15.784689       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:25:15.785107       1 config.go:200] "Starting service config controller"
	I1025 10:25:15.785159       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:25:15.785487       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:25:15.785550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:25:15.785997       1 config.go:309] "Starting node config controller"
	I1025 10:25:15.786060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:25:15.786090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:25:15.885309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:25:15.885316       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:25:15.885636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b] <==
	I1025 10:26:10.473355       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:26:11.308061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:26:12.923121       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:26:12.923264       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:26:12.923400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:26:12.993612       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:26:12.993723       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:26:13.010193       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:26:13.010469       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:26:13.010493       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:26:13.013160       1 config.go:200] "Starting service config controller"
	I1025 10:26:13.013183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:26:13.013199       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:26:13.013204       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:26:13.013214       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:26:13.013232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:26:13.013514       1 config.go:309] "Starting node config controller"
	I1025 10:26:13.013524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:26:13.113879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:26:13.113911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:26:13.113927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:26:13.113937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f] <==
	I1025 10:26:10.156263       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:26:12.659526       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:26:12.659634       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:26:12.659668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:26:12.659696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:26:12.787755       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:26:12.787793       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:26:12.793347       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:12.793393       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:12.794214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:26:12.794294       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:26:12.895994       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082] <==
	E1025 10:25:06.490804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:25:06.490854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:25:06.495240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:25:06.495390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:25:06.496315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:25:06.498870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:25:06.499028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:25:06.499134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:25:06.499287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:25:06.499317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:25:06.499421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:25:06.499495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:25:06.499569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:25:06.499640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:25:06.499726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:25:06.499869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:25:06.499441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:25:06.500331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1025 10:25:07.886587       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:00.138565       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 10:26:00.138621       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 10:26:00.138648       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 10:26:00.138684       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:00.139319       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 10:26:00.139346       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645468    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645607    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645742    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: I1025 10:26:07.649858    1297 scope.go:117] "RemoveContainer" containerID="7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650342    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650522    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650672    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mwxxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aef0e38-29c2-4dbc-b75d-96b9454113b4" pod="kube-system/coredns-66bc5c9577-mwxxc"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650831    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e39b84e1439a35b8cc4ca27447f425f" pod="kube-system/etcd-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650982    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48af73e8293ba75a878f8d53435bf781" pod="kube-system/kube-controller-manager-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.651130    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: I1025 10:26:07.659069    1297 scope.go:117] "RemoveContainer" containerID="a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659568    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659755    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659921    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg7cn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8afd0eec-98cc-4d94-ac83-e6734161aea0" pod="kube-system/kube-proxy-gg7cn"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660078    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660244    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mwxxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aef0e38-29c2-4dbc-b75d-96b9454113b4" pod="kube-system/coredns-66bc5c9577-mwxxc"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660403    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e39b84e1439a35b8cc4ca27447f425f" pod="kube-system/etcd-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660558    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48af73e8293ba75a878f8d53435bf781" pod="kube-system/kube-controller-manager-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.675771    1297 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:pause-598105\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.677576    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-598105\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.725893    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-598105\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.762172    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gg7cn\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="8afd0eec-98cc-4d94-ac83-e6734161aea0" pod="kube-system/kube-proxy-gg7cn"
	Oct 25 10:26:26 pause-598105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:26:26 pause-598105 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:26:26 pause-598105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
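The kindnet, kube-proxy, kube-scheduler, and kube-controller-manager sections in the dump above all show the same client-go startup pattern: start the informers, then block until the initial list completes ("Waiting for caches to sync" followed by "Caches are synced"). A minimal sketch of that pattern, assuming a reachable kubeconfig at the default path; the node informer is illustrative, not what any one component watches:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Illustrative: load the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		nodes := factory.Core().V1().Nodes().Informer()

		ctx, cancel := context.WithCancel(context.Background())
		defer cancel()
		factory.Start(ctx.Done())

		// The "Waiting for caches to sync" / "Caches are synced" pairs in
		// the component logs correspond to this call returning true.
		if !cache.WaitForCacheSync(ctx.Done(), nodes.HasSynced) {
			panic("informer caches never synced")
		}
		fmt.Println("caches are synced")
	}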
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598105 -n pause-598105
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598105 -n pause-598105: exit status 2 (388.92105ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
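Here `status` still reports the apiserver as Running; the kubelet entries in the dump above show the endpoint in question (repeated dials to 192.168.85.2:8443). A sketch of probing that endpoint directly, assuming anonymous access to /livez is allowed (the Kubernetes default) and that the probe runs somewhere that can reach the cluster network, e.g. inside the node; certificate verification is skipped because the probe carries no cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// InsecureSkipVerify: the probe does not have the cluster CA bundle.
		client := &http.Client{
			Timeout: 3 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.85.2:8443/livez")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s: %s\n", resp.Status, body)
	}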
helpers_test.go:269: (dbg) Run:  kubectl --context pause-598105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
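The kubectl call above filters server-side with a field selector; the equivalent list through client-go, sketched under the same default-kubeconfig assumption as the informer example earlier:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Mirrors `kubectl get po -A --field-selector=status.phase!=Running`:
		// the phase filter is applied by the apiserver, not the client.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}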
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
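For reference, the snapshot above is just the three standard proxy variables read from the host environment; a trivial sketch reproducing the same output convention:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Print "<empty>" for unset variables, as the report does.
		for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(key)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q\n", key, val)
		}
	}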
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-598105
helpers_test.go:243: (dbg) docker inspect pause-598105:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9",
	        "Created": "2025-10-25T10:24:41.376840239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449937,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:24:41.443125084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/hosts",
	        "LogPath": "/var/lib/docker/containers/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9/7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9-json.log",
	        "Name": "/pause-598105",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-598105:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-598105",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ac8227c606816bdda73938cbff51778d2dfa1d3f862a01d67154d03e82037a9",
	                "LowerDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb69e24e0e50222ba98e8f411cf8b463f98cf01f37f37e09d8381aecdf0e2a4a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-598105",
	                "Source": "/var/lib/docker/volumes/pause-598105/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-598105",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-598105",
	                "name.minikube.sigs.k8s.io": "pause-598105",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25cff9f3863d49a8056c76e564466dd9b76ecd41ce771e0315b8510499a78f0d",
	            "SandboxKey": "/var/run/docker/netns/25cff9f3863d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-598105": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:45:95:e5:dc:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45375a1d630ca21e9d5e39c177795db37f449d933054370e1fa920adf3c027d9",
	                    "EndpointID": "624292d51ce4b0d4a90ac2f8dc0270c107cd617bce6b1f24bd54fb1011666386",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-598105",
	                        "7ac8227c6068"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
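The parts of the inspect output that matter for a post-mortem are usually State and the published ports: 8443/tcp (the apiserver) is bound to a dynamically assigned host port, 33400 here. A sketch of reading that mapping programmatically with the Docker Go SDK; the container name is taken from the dump, and the client options are the stock environment-based ones:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		// "pause-598105" is the container name from the inspect output above.
		info, err := cli.ContainerInspect(context.Background(), "pause-598105")
		if err != nil {
			panic(err)
		}

		// 8443/tcp is published on a dynamically assigned host port
		// (33400 in this run), so it has to be read back at runtime.
		for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
		}
	}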
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-598105 -n pause-598105
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-598105 -n pause-598105: exit status 2 (341.193232ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
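The `--format={{.Host}}` and `--format={{.APIServer}}` flags used in these checks are Go text/template expressions evaluated against minikube's status value; a minimal sketch of the mechanism, with the struct shape assumed from the fields this report queries:

	package main

	import (
		"os"
		"text/template"
	)

	// Assumed shape: the report formats {{.Host}} and {{.APIServer}},
	// so the status value must expose at least these fields.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		// Equivalent of `status --format={{.APIServer}}`.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"})
	}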
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-598105 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-598105 logs -n 25: (1.352115964s)
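The `(dbg) Run` / `(dbg) Done ... (1.352115964s)` bookkeeping amounts to running the binary, capturing combined output, and timing the call; a sketch of that helper pattern, with the binary path and arguments copied from the line above:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		// Capture stdout and stderr together, as the post-mortem dumps do.
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "pause-598105", "logs", "-n", "25").CombinedOutput()
		fmt.Printf("took %s, err=%v\n%s", time.Since(start), err, out)
	}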
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │                     │
	│ start   │ -p NoKubernetes-704940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p missing-upgrade-353666 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-353666    │ jenkins │ v1.32.0 │ 25 Oct 25 10:20 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ delete  │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ ssh     │ -p NoKubernetes-704940 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ start   │ -p missing-upgrade-353666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-353666    │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ stop    │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ start   │ -p NoKubernetes-704940 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:21 UTC │
	│ ssh     │ -p NoKubernetes-704940 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │                     │
	│ delete  │ -p NoKubernetes-704940                                                                                                                   │ NoKubernetes-704940       │ jenkins │ v1.37.0 │ 25 Oct 25 10:21 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ delete  │ -p missing-upgrade-353666                                                                                                                │ missing-upgrade-353666    │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:22 UTC │
	│ start   │ -p stopped-upgrade-853068 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-853068    │ jenkins │ v1.32.0 │ 25 Oct 25 10:22 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:22 UTC │                     │
	│ stop    │ stopped-upgrade-853068 stop                                                                                                              │ stopped-upgrade-853068    │ jenkins │ v1.32.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p stopped-upgrade-853068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-853068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ delete  │ -p stopped-upgrade-853068                                                                                                                │ stopped-upgrade-853068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:23 UTC │
	│ start   │ -p running-upgrade-567548 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-567548    │ jenkins │ v1.32.0 │ 25 Oct 25 10:23 UTC │ 25 Oct 25 10:24 UTC │
	│ start   │ -p running-upgrade-567548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-567548    │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:24 UTC │
	│ delete  │ -p running-upgrade-567548                                                                                                                │ running-upgrade-567548    │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:24 UTC │
	│ start   │ -p pause-598105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:24 UTC │ 25 Oct 25 10:25 UTC │
	│ start   │ -p pause-598105 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:25 UTC │ 25 Oct 25 10:26 UTC │
	│ pause   │ -p pause-598105 --alsologtostderr -v=5                                                                                                   │ pause-598105              │ jenkins │ v1.37.0 │ 25 Oct 25 10:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:25:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:25:58.673660  454182 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:25:58.673838  454182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:25:58.673868  454182 out.go:374] Setting ErrFile to fd 2...
	I1025 10:25:58.673888  454182 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:25:58.674245  454182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:25:58.674666  454182 out.go:368] Setting JSON to false
	I1025 10:25:58.675981  454182 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7709,"bootTime":1761380250,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:25:58.676087  454182 start.go:141] virtualization:  
	I1025 10:25:58.679207  454182 out.go:179] * [pause-598105] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:25:58.683300  454182 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:25:58.683415  454182 notify.go:220] Checking for updates...
	I1025 10:25:58.689570  454182 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:25:58.692617  454182 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:25:58.695533  454182 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:25:58.698428  454182 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:25:58.701335  454182 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:25:58.704769  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:25:58.705324  454182 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:25:58.734489  454182 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:25:58.734611  454182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:25:58.799018  454182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:25:58.789583652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:25:58.799168  454182 docker.go:318] overlay module found
	I1025 10:25:58.802389  454182 out.go:179] * Using the docker driver based on existing profile
	I1025 10:25:58.805317  454182 start.go:305] selected driver: docker
	I1025 10:25:58.805338  454182 start.go:925] validating driver "docker" against &{Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:25:58.805475  454182 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:25:58.805585  454182 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:25:58.868444  454182 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:25:58.859468956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:25:58.868869  454182 cni.go:84] Creating CNI manager for ""
	I1025 10:25:58.868940  454182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:25:58.868994  454182 start.go:349] cluster config:
	{Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:25:58.874046  454182 out.go:179] * Starting "pause-598105" primary control-plane node in "pause-598105" cluster
	I1025 10:25:58.876871  454182 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:25:58.879850  454182 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:25:58.882792  454182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:25:58.882876  454182 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:25:58.882886  454182 cache.go:58] Caching tarball of preloaded images
	I1025 10:25:58.882926  454182 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:25:58.883027  454182 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:25:58.883043  454182 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:25:58.883235  454182 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/config.json ...
	I1025 10:25:58.902344  454182 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:25:58.902370  454182 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:25:58.902391  454182 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:25:58.902416  454182 start.go:360] acquireMachinesLock for pause-598105: {Name:mk7275af11579743c9d1d77cd490c241a80c1ffa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:25:58.902471  454182 start.go:364] duration metric: took 37.572µs to acquireMachinesLock for "pause-598105"
	I1025 10:25:58.902496  454182 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:25:58.902502  454182 fix.go:54] fixHost starting: 
	I1025 10:25:58.902769  454182 cli_runner.go:164] Run: docker container inspect pause-598105 --format={{.State.Status}}
	I1025 10:25:58.936359  454182 fix.go:112] recreateIfNeeded on pause-598105: state=Running err=<nil>
	W1025 10:25:58.936388  454182 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:25:55.873222  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:25:55.884427  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:25:55.884516  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:25:55.927204  438892 cri.go:89] found id: ""
	I1025 10:25:55.927247  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.927266  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:25:55.927274  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:25:55.927348  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:25:55.964946  438892 cri.go:89] found id: ""
	I1025 10:25:55.964972  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.964981  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:25:55.964987  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:25:55.965060  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:25:55.994528  438892 cri.go:89] found id: ""
	I1025 10:25:55.994556  438892 logs.go:282] 0 containers: []
	W1025 10:25:55.994565  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:25:55.994572  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:25:55.994636  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:25:56.023599  438892 cri.go:89] found id: ""
	I1025 10:25:56.023623  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.023632  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:25:56.023639  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:25:56.023698  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:25:56.052674  438892 cri.go:89] found id: ""
	I1025 10:25:56.052703  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.052713  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:25:56.052720  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:25:56.052779  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:25:56.082456  438892 cri.go:89] found id: ""
	I1025 10:25:56.082483  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.082494  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:25:56.082501  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:25:56.082560  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:25:56.108696  438892 cri.go:89] found id: ""
	I1025 10:25:56.108722  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.108731  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:25:56.108737  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:25:56.108795  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:25:56.136778  438892 cri.go:89] found id: ""
	I1025 10:25:56.136858  438892 logs.go:282] 0 containers: []
	W1025 10:25:56.136874  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:25:56.136884  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:25:56.136901  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:25:56.257022  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:25:56.257059  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:25:56.273028  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:25:56.273056  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:25:56.337738  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:25:56.337761  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:25:56.337784  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:25:56.374950  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:25:56.374984  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:25:58.917033  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:25:58.930997  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:25:58.931071  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:25:58.977248  438892 cri.go:89] found id: ""
	I1025 10:25:58.977272  438892 logs.go:282] 0 containers: []
	W1025 10:25:58.977281  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:25:58.977288  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:25:58.977354  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:25:59.008408  438892 cri.go:89] found id: ""
	I1025 10:25:59.008429  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.008438  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:25:59.008445  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:25:59.008508  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:25:59.045488  438892 cri.go:89] found id: ""
	I1025 10:25:59.045510  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.045519  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:25:59.045525  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:25:59.045582  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:25:59.077820  438892 cri.go:89] found id: ""
	I1025 10:25:59.077843  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.077851  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:25:59.077857  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:25:59.077924  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:25:59.105802  438892 cri.go:89] found id: ""
	I1025 10:25:59.105825  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.105833  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:25:59.105839  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:25:59.105897  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:25:59.141324  438892 cri.go:89] found id: ""
	I1025 10:25:59.141415  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.141427  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:25:59.141435  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:25:59.141516  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:25:59.176002  438892 cri.go:89] found id: ""
	I1025 10:25:59.176025  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.176033  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:25:59.176039  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:25:59.176097  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:25:59.232504  438892 cri.go:89] found id: ""
	I1025 10:25:59.232525  438892 logs.go:282] 0 containers: []
	W1025 10:25:59.232533  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:25:59.232542  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:25:59.232553  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:25:59.377757  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:25:59.377835  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:25:59.395665  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:25:59.395740  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:25:59.478555  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:25:59.478585  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:25:59.478597  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:25:59.517317  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:25:59.517353  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:25:58.940710  454182 out.go:252] * Updating the running docker "pause-598105" container ...
	I1025 10:25:58.940747  454182 machine.go:93] provisionDockerMachine start ...
	I1025 10:25:58.940836  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:58.959746  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:58.960077  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:58.960093  454182 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:25:59.123379  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-598105
	
	I1025 10:25:59.123415  454182 ubuntu.go:182] provisioning hostname "pause-598105"
	I1025 10:25:59.123480  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.147226  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.147533  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.147545  454182 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-598105 && echo "pause-598105" | sudo tee /etc/hostname
	I1025 10:25:59.325360  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-598105
	
	I1025 10:25:59.325437  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.347660  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.347968  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.347986  454182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-598105' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-598105/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-598105' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:25:59.512624  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:25:59.512647  454182 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:25:59.512671  454182 ubuntu.go:190] setting up certificates
	I1025 10:25:59.512681  454182 provision.go:84] configureAuth start
	I1025 10:25:59.512748  454182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598105
	I1025 10:25:59.535547  454182 provision.go:143] copyHostCerts
	I1025 10:25:59.535610  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:25:59.535634  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:25:59.535715  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:25:59.535818  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:25:59.535830  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:25:59.535860  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:25:59.535924  454182 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:25:59.535933  454182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:25:59.535958  454182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:25:59.536020  454182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.pause-598105 san=[127.0.0.1 192.168.85.2 localhost minikube pause-598105]
	I1025 10:25:59.667515  454182 provision.go:177] copyRemoteCerts
	I1025 10:25:59.667584  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:25:59.667633  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.689598  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:25:59.794972  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:25:59.812515  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:25:59.830616  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:25:59.849677  454182 provision.go:87] duration metric: took 336.972364ms to configureAuth
	I1025 10:25:59.849748  454182 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:25:59.849988  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:25:59.850110  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:25:59.867250  454182 main.go:141] libmachine: Using SSH client type: native
	I1025 10:25:59.867551  454182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33397 <nil> <nil>}
	I1025 10:25:59.867572  454182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:26:02.088582  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:02.099126  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:02.099217  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:02.129137  438892 cri.go:89] found id: ""
	I1025 10:26:02.129168  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.129177  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:02.129185  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:02.129244  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:02.158327  438892 cri.go:89] found id: ""
	I1025 10:26:02.158354  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.158362  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:02.158369  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:02.158429  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:02.183796  438892 cri.go:89] found id: ""
	I1025 10:26:02.183824  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.183833  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:02.183842  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:02.183920  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:02.214150  438892 cri.go:89] found id: ""
	I1025 10:26:02.214266  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.214280  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:02.214287  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:02.214353  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:02.246010  438892 cri.go:89] found id: ""
	I1025 10:26:02.246036  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.246044  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:02.246051  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:02.246114  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:02.272221  438892 cri.go:89] found id: ""
	I1025 10:26:02.272248  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.272258  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:02.272264  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:02.272326  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:02.297128  438892 cri.go:89] found id: ""
	I1025 10:26:02.297206  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.297229  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:02.297248  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:02.297319  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:02.320647  438892 cri.go:89] found id: ""
	I1025 10:26:02.320672  438892 logs.go:282] 0 containers: []
	W1025 10:26:02.320691  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:02.320715  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:02.320735  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:02.436710  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:02.436795  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:02.453297  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:02.453332  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:02.526489  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:02.526564  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:02.526585  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:02.563093  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:02.563128  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:05.347103  454182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:26:05.347127  454182 machine.go:96] duration metric: took 6.406371294s to provisionDockerMachine
	I1025 10:26:05.347137  454182 start.go:293] postStartSetup for "pause-598105" (driver="docker")
	I1025 10:26:05.347175  454182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:26:05.347238  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:26:05.347295  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.372342  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.481828  454182 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:26:05.485543  454182 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:26:05.485582  454182 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:26:05.485595  454182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:26:05.485651  454182 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:26:05.485728  454182 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:26:05.485847  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:26:05.497798  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:26:05.515671  454182 start.go:296] duration metric: took 168.518473ms for postStartSetup
	I1025 10:26:05.515758  454182 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:26:05.515812  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.547320  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.660655  454182 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:26:05.667633  454182 fix.go:56] duration metric: took 6.765125417s for fixHost
	I1025 10:26:05.667653  454182 start.go:83] releasing machines lock for "pause-598105", held for 6.765168789s
	I1025 10:26:05.667719  454182 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-598105
	I1025 10:26:05.685302  454182 ssh_runner.go:195] Run: cat /version.json
	I1025 10:26:05.685347  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.685592  454182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:26:05.685640  454182 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-598105
	I1025 10:26:05.715723  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.732411  454182 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33397 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/pause-598105/id_rsa Username:docker}
	I1025 10:26:05.827004  454182 ssh_runner.go:195] Run: systemctl --version
	I1025 10:26:05.919175  454182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:26:05.958355  454182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:26:05.962684  454182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:26:05.962755  454182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:26:05.971094  454182 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:26:05.971190  454182 start.go:495] detecting cgroup driver to use...
	I1025 10:26:05.971238  454182 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:26:05.971310  454182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:26:05.986424  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:26:05.999727  454182 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:26:05.999794  454182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:26:06.018201  454182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:26:06.032399  454182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:26:06.173711  454182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:26:06.311729  454182 docker.go:234] disabling docker service ...
	I1025 10:26:06.311860  454182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:26:06.328180  454182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:26:06.342228  454182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:26:06.477580  454182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:26:06.612403  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:26:06.625857  454182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:26:06.640620  454182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:26:06.640688  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.649694  454182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:26:06.649768  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.659032  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.668431  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.678868  454182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:26:06.687432  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.696649  454182 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.704648  454182 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:26:06.713417  454182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:26:06.720960  454182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:26:06.728225  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:06.857036  454182 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:26:07.013524  454182 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:26:07.013646  454182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:26:07.017552  454182 start.go:563] Will wait 60s for crictl version
	I1025 10:26:07.017658  454182 ssh_runner.go:195] Run: which crictl
	I1025 10:26:07.021405  454182 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:26:07.046498  454182 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:26:07.046642  454182 ssh_runner.go:195] Run: crio --version
	I1025 10:26:07.080689  454182 ssh_runner.go:195] Run: crio --version
	I1025 10:26:07.112207  454182 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:26:07.115209  454182 cli_runner.go:164] Run: docker network inspect pause-598105 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:26:07.130417  454182 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:26:07.134290  454182 kubeadm.go:883] updating cluster {Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:26:07.134432  454182 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:26:07.134506  454182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:26:07.172289  454182 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:26:07.172315  454182 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:26:07.172371  454182 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:26:07.197361  454182 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:26:07.197388  454182 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:26:07.197396  454182 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:26:07.197506  454182 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-598105 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
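The kubelet drop-in above uses the standard systemd idiom of an empty `ExecStart=` line to clear the command inherited from the base unit before setting minikube's own flags. A hedged sketch of rendering such a drop-in with text/template (the template shape and field names are illustrative, not minikube's actual source):

package main

import (
	"os"
	"text/template"
)

// unitTmpl mirrors the drop-in in the log: the bare ExecStart= resets
// any ExecStart defined by the packaged kubelet.service.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	data := struct{ KubeletPath, NodeName, NodeIP string }{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.1/kubelet",
		NodeName:    "pause-598105",
		NodeIP:      "192.168.85.2",
	}
	t := template.Must(template.New("unit").Parse(unitTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}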
	I1025 10:26:07.197588  454182 ssh_runner.go:195] Run: crio config
	I1025 10:26:07.249157  454182 cni.go:84] Creating CNI manager for ""
	I1025 10:26:07.249184  454182 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:26:07.249203  454182 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:26:07.249226  454182 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-598105 NodeName:pause-598105 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:26:07.249357  454182 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-598105"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:26:07.249435  454182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
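The kubeadm config printed above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by `---` lines, later written to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only sketch that sanity-checks such a stream by listing each document's kind (splitting on the separator convention rather than doing real YAML parsing):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm separates the four config documents with "---" lines.
	for i, doc := range strings.Split(string(raw), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("doc %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}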
	I1025 10:26:07.257536  454182 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:26:07.257688  454182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:26:07.265427  454182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1025 10:26:07.278093  454182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:26:07.291715  454182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1025 10:26:07.304957  454182 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:26:07.308799  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:07.444572  454182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:26:07.458543  454182 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105 for IP: 192.168.85.2
	I1025 10:26:07.458616  454182 certs.go:195] generating shared ca certs ...
	I1025 10:26:07.458646  454182 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:07.458809  454182 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:26:07.458894  454182 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:26:07.458929  454182 certs.go:257] generating profile certs ...
	I1025 10:26:07.459048  454182 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key
	I1025 10:26:07.459243  454182 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.key.b50a4ea9
	I1025 10:26:07.459325  454182 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.key
	I1025 10:26:07.459462  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:26:07.459504  454182 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:26:07.459517  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:26:07.459540  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:26:07.459572  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:26:07.459596  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:26:07.459643  454182 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:26:07.460256  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:26:07.478556  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:26:07.496895  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:26:07.515405  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:26:07.532635  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1025 10:26:07.549408  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:26:07.567234  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:26:07.584090  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:26:07.602678  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:26:07.638865  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:26:07.676812  454182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:26:07.699264  454182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:26:07.723755  454182 ssh_runner.go:195] Run: openssl version
	I1025 10:26:07.731569  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:26:07.744578  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.760783  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.760900  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:26:07.856417  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:26:07.879924  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:26:07.898982  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.908694  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.908760  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:26:07.978444  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:26:07.991711  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:26:08.005176  454182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.012137  454182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.012296  454182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:26:08.080141  454182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
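The openssl/ln pairs above implement the classic CA-directory convention: each PEM gets a symlink named `<subject-hash>.0` under /etc/ssl/certs so TLS libraries can locate it by hash. A sketch of one iteration (path taken from the log; needs root to write /etc/ssl/certs, and minikube shells this out over SSH rather than running Go on the node):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	// openssl prints the subject hash the .0 symlink must be named after.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatalf("openssl x509 -hash: %v", err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replicate ln -fs: force-replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", link, "->", pem)
}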
	I1025 10:26:08.091343  454182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:26:08.098797  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:26:08.160585  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:26:08.209805  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:26:08.261473  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:26:08.343939  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:26:08.434887  454182 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
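Each `-checkend 86400` run above asks openssl whether the certificate will be expired 24 hours from now, which drives the decision to reuse or regenerate certs. The same check in pure Go with crypto/x509 (file path is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the cert
	// will already be expired 24 hours from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid beyond 24h, NotAfter:", cert.NotAfter)
}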
	I1025 10:26:08.506599  454182 kubeadm.go:400] StartCluster: {Name:pause-598105 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-598105 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:26:08.506733  454182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:26:08.506797  454182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:26:08.552725  454182 cri.go:89] found id: "e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b"
	I1025 10:26:08.552748  454182 cri.go:89] found id: "54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd"
	I1025 10:26:08.552755  454182 cri.go:89] found id: "7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f"
	I1025 10:26:08.552759  454182 cri.go:89] found id: "bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2"
	I1025 10:26:08.552762  454182 cri.go:89] found id: "12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2"
	I1025 10:26:08.552766  454182 cri.go:89] found id: "140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71"
	I1025 10:26:08.552770  454182 cri.go:89] found id: "567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d"
	I1025 10:26:08.552773  454182 cri.go:89] found id: "c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2"
	I1025 10:26:08.552776  454182 cri.go:89] found id: "a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	I1025 10:26:08.552783  454182 cri.go:89] found id: "f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb"
	I1025 10:26:08.552787  454182 cri.go:89] found id: "7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	I1025 10:26:08.552790  454182 cri.go:89] found id: "c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b"
	I1025 10:26:08.552793  454182 cri.go:89] found id: "f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a"
	I1025 10:26:08.552796  454182 cri.go:89] found id: "40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4"
	I1025 10:26:08.552800  454182 cri.go:89] found id: ""
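The ID list above comes from a single `crictl ps` filtered by the standard io.kubernetes.pod.namespace label; `--quiet` makes it print one container ID per line, with an empty trailing entry. A sketch of the same listing (run as root inside the node):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl ps: %v", err)
	}
	// strings.Fields drops the blank line that --quiet leaves at the end.
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}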
	I1025 10:26:08.552850  454182 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:26:08.575120  454182 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:26:08Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:26:08.575271  454182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:26:08.636238  454182 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:26:08.636259  454182 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:26:08.636313  454182 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:26:08.674174  454182 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:26:08.674789  454182 kubeconfig.go:125] found "pause-598105" server: "https://192.168.85.2:8443"
	I1025 10:26:08.675628  454182 kapi.go:59] client config for pause-598105: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key", CAFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
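The rest.Config dump above is essentially all client-go needs to reach the restarted apiserver: the host URL plus the profile's client cert/key and the cluster CA. A minimal sketch building the same config by hand (paths taken from the log; requires the k8s.io/client-go module):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	base := "/home/jenkins/minikube-integration/21794-292167/.minikube"
	cfg := &rest.Config{
		Host: "https://192.168.85.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: base + "/profiles/pause-598105/client.crt",
			KeyFile:  base + "/profiles/pause-598105/client.key",
			CAFile:   base + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Cheap round-trip to prove the config works.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}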
	I1025 10:26:08.676465  454182 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1025 10:26:08.676488  454182 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1025 10:26:08.676495  454182 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1025 10:26:08.676501  454182 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1025 10:26:08.676506  454182 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1025 10:26:08.676910  454182 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:26:08.719697  454182 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:26:08.719731  454182 kubeadm.go:601] duration metric: took 83.465114ms to restartPrimaryControlPlane
	I1025 10:26:08.719740  454182 kubeadm.go:402] duration metric: took 213.152022ms to StartCluster
	I1025 10:26:08.719756  454182 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:08.719820  454182 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:26:08.720691  454182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:26:08.720913  454182 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:26:08.721205  454182 config.go:182] Loaded profile config "pause-598105": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:26:08.721252  454182 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:26:08.724449  454182 out.go:179] * Verifying Kubernetes components...
	I1025 10:26:08.724540  454182 out.go:179] * Enabled addons: 
	I1025 10:26:05.094924  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:05.105739  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:05.105810  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:05.132770  438892 cri.go:89] found id: ""
	I1025 10:26:05.132793  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.132801  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:05.132809  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:05.132872  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:05.189682  438892 cri.go:89] found id: ""
	I1025 10:26:05.189704  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.189715  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:05.189722  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:05.189780  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:05.234293  438892 cri.go:89] found id: ""
	I1025 10:26:05.234316  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.234324  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:05.234331  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:05.234387  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:05.273455  438892 cri.go:89] found id: ""
	I1025 10:26:05.273477  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.273486  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:05.273493  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:05.273550  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:05.309427  438892 cri.go:89] found id: ""
	I1025 10:26:05.309451  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.309464  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:05.309471  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:05.309535  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:05.344324  438892 cri.go:89] found id: ""
	I1025 10:26:05.344355  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.344365  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:05.344372  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:05.344432  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:05.395716  438892 cri.go:89] found id: ""
	I1025 10:26:05.395740  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.395749  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:05.395757  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:05.395813  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:05.430107  438892 cri.go:89] found id: ""
	I1025 10:26:05.430129  438892 logs.go:282] 0 containers: []
	W1025 10:26:05.430137  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:05.430146  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:05.430157  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:05.565974  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:05.566012  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:05.585008  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:05.585040  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:05.666792  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:05.666824  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:05.666836  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:05.713065  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:05.713102  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:08.269252  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:08.287057  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:08.287129  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:08.341333  438892 cri.go:89] found id: ""
	I1025 10:26:08.341360  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.341369  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:08.341376  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:08.341432  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:08.380433  438892 cri.go:89] found id: ""
	I1025 10:26:08.380455  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.380463  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:08.380470  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:08.380531  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:08.423944  438892 cri.go:89] found id: ""
	I1025 10:26:08.423972  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.423981  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:08.423987  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:08.424050  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:08.470515  438892 cri.go:89] found id: ""
	I1025 10:26:08.470543  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.470551  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:08.470558  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:08.470620  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:08.521211  438892 cri.go:89] found id: ""
	I1025 10:26:08.521240  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.521251  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:08.521259  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:08.521319  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:08.565492  438892 cri.go:89] found id: ""
	I1025 10:26:08.565514  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.565524  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:08.565531  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:08.565587  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:08.604673  438892 cri.go:89] found id: ""
	I1025 10:26:08.604696  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.604705  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:08.604712  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:08.604767  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:08.672466  438892 cri.go:89] found id: ""
	I1025 10:26:08.672488  438892 logs.go:282] 0 containers: []
	W1025 10:26:08.672502  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:08.672513  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:08.672526  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:08.705144  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:08.705173  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:08.836525  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:08.836543  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:08.836557  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:08.898412  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:08.898492  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:08.953850  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:08.953879  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:08.727467  454182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:26:08.727635  454182 addons.go:514] duration metric: took 6.376254ms for enable addons: enabled=[]
	I1025 10:26:09.107711  454182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:26:09.151766  454182 node_ready.go:35] waiting up to 6m0s for node "pause-598105" to be "Ready" ...
	I1025 10:26:12.762851  454182 node_ready.go:49] node "pause-598105" is "Ready"
	I1025 10:26:12.762883  454182 node_ready.go:38] duration metric: took 3.611071663s for node "pause-598105" to be "Ready" ...
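node_ready above polls until the node's Ready condition turns true. A hedged client-go sketch of that wait (kubeconfig loaded via clientcmd for simplicity, instead of minikube's in-process rest.Config; 6m deadline matches the log):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21794-292167/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "pause-598105", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node Ready")
}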
	I1025 10:26:12.762898  454182 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:26:12.762956  454182 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:12.782805  454182 api_server.go:72] duration metric: took 4.061856579s to wait for apiserver process to appear ...
	I1025 10:26:12.782828  454182 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:26:12.782847  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:12.829275  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:26:12.829305  454182 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
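The 500s above are expected mid-restart: /healthz aggregates the apiserver's post-start hooks and only flips to 200 once the last controller hook completes. A sketch of the probe itself, assuming the healthz endpoint wants the profile's client cert (paths from the log; anonymous access may suffice on some configurations):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	base := "/home/jenkins/minikube-integration/21794-292167/.minikube"
	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/pause-598105/client.crt",
		base+"/profiles/pause-598105/client.key")
	if err != nil {
		log.Fatal(err)
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// 500 with the [+]/[-] hook list while hooks are pending, 200 "ok" once healthy.
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}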
	I1025 10:26:13.283954  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:13.292169  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:26:13.292198  454182 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:26:11.650889  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:11.664251  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:11.664324  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:11.705607  438892 cri.go:89] found id: ""
	I1025 10:26:11.705646  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.705656  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:11.705663  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:11.705726  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:11.765578  438892 cri.go:89] found id: ""
	I1025 10:26:11.765606  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.765616  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:11.765622  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:11.765714  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:11.813395  438892 cri.go:89] found id: ""
	I1025 10:26:11.813425  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.813434  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:11.813441  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:11.813500  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:11.862668  438892 cri.go:89] found id: ""
	I1025 10:26:11.862696  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.862706  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:11.862712  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:11.862772  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:11.908455  438892 cri.go:89] found id: ""
	I1025 10:26:11.908491  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.908501  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:11.908508  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:11.908578  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:11.973748  438892 cri.go:89] found id: ""
	I1025 10:26:11.973776  438892 logs.go:282] 0 containers: []
	W1025 10:26:11.973793  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:11.973800  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:11.973868  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:12.030510  438892 cri.go:89] found id: ""
	I1025 10:26:12.030548  438892 logs.go:282] 0 containers: []
	W1025 10:26:12.030557  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:12.030565  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:12.030636  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:12.070253  438892 cri.go:89] found id: ""
	I1025 10:26:12.070282  438892 logs.go:282] 0 containers: []
	W1025 10:26:12.070300  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:12.070309  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:12.070321  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:12.208728  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:12.208766  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:12.225009  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:12.225041  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:12.332313  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:12.332347  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:12.332360  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:12.370675  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:12.370716  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:13.783005  454182 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:26:13.791319  454182 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:26:13.792479  454182 api_server.go:141] control plane version: v1.34.1
	I1025 10:26:13.792505  454182 api_server.go:131] duration metric: took 1.009669596s to wait for apiserver health ...
	I1025 10:26:13.792516  454182 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:26:13.795473  454182 system_pods.go:59] 7 kube-system pods found
	I1025 10:26:13.795515  454182 system_pods.go:61] "coredns-66bc5c9577-mwxxc" [5aef0e38-29c2-4dbc-b75d-96b9454113b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:26:13.795551  454182 system_pods.go:61] "etcd-pause-598105" [5dab1e0a-ec54-4063-807f-d9eb06f2d9b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:26:13.795566  454182 system_pods.go:61] "kindnet-x2zhm" [183c57f8-d19b-4e10-b018-d0518418dc4e] Running
	I1025 10:26:13.795575  454182 system_pods.go:61] "kube-apiserver-pause-598105" [3c26c8e4-11aa-4c5a-8b0e-7fdbd046f314] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:26:13.795588  454182 system_pods.go:61] "kube-controller-manager-pause-598105" [ca780a17-ece9-4aed-aae6-59a73940fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:26:13.795594  454182 system_pods.go:61] "kube-proxy-gg7cn" [8afd0eec-98cc-4d94-ac83-e6734161aea0] Running
	I1025 10:26:13.795607  454182 system_pods.go:61] "kube-scheduler-pause-598105" [be0d2fb9-d287-48c8-8d9d-9e9f20a30f13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:26:13.795637  454182 system_pods.go:74] duration metric: took 3.113345ms to wait for pod list to return data ...
	I1025 10:26:13.795662  454182 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:26:13.798527  454182 default_sa.go:45] found service account: "default"
	I1025 10:26:13.798593  454182 default_sa.go:55] duration metric: took 2.922778ms for default service account to be created ...
	I1025 10:26:13.798609  454182 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:26:13.801439  454182 system_pods.go:86] 7 kube-system pods found
	I1025 10:26:13.801475  454182 system_pods.go:89] "coredns-66bc5c9577-mwxxc" [5aef0e38-29c2-4dbc-b75d-96b9454113b4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:26:13.801485  454182 system_pods.go:89] "etcd-pause-598105" [5dab1e0a-ec54-4063-807f-d9eb06f2d9b7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:26:13.801490  454182 system_pods.go:89] "kindnet-x2zhm" [183c57f8-d19b-4e10-b018-d0518418dc4e] Running
	I1025 10:26:13.801497  454182 system_pods.go:89] "kube-apiserver-pause-598105" [3c26c8e4-11aa-4c5a-8b0e-7fdbd046f314] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:26:13.801504  454182 system_pods.go:89] "kube-controller-manager-pause-598105" [ca780a17-ece9-4aed-aae6-59a73940fcd4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:26:13.801513  454182 system_pods.go:89] "kube-proxy-gg7cn" [8afd0eec-98cc-4d94-ac83-e6734161aea0] Running
	I1025 10:26:13.801520  454182 system_pods.go:89] "kube-scheduler-pause-598105" [be0d2fb9-d287-48c8-8d9d-9e9f20a30f13] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:26:13.801540  454182 system_pods.go:126] duration metric: took 2.924731ms to wait for k8s-apps to be running ...
	I1025 10:26:13.801548  454182 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:26:13.801603  454182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:26:13.814689  454182 system_svc.go:56] duration metric: took 13.131721ms WaitForService to wait for kubelet
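The system_svc check above treats `systemctl is-active --quiet` purely as an exit-code test: with --quiet nothing is printed, and exit status 0 means the unit is active. Equivalent sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; success/failure is carried by the exit code.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}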
	I1025 10:26:13.814720  454182 kubeadm.go:586] duration metric: took 5.093774904s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:26:13.814739  454182 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:26:13.817763  454182 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:26:13.817799  454182 node_conditions.go:123] node cpu capacity is 2
	I1025 10:26:13.817812  454182 node_conditions.go:105] duration metric: took 3.06774ms to run NodePressure ...
	I1025 10:26:13.817825  454182 start.go:241] waiting for startup goroutines ...
	I1025 10:26:13.817833  454182 start.go:246] waiting for cluster config update ...
	I1025 10:26:13.817842  454182 start.go:255] writing updated cluster config ...
	I1025 10:26:13.818198  454182 ssh_runner.go:195] Run: rm -f paused
	I1025 10:26:13.821830  454182 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:26:13.822507  454182 kapi.go:59] client config for pause-598105: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.crt", KeyFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/pause-598105/client.key", CAFile:"/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1025 10:26:13.826310  454182 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mwxxc" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:26:15.832723  454182 pod_ready.go:104] pod "coredns-66bc5c9577-mwxxc" is not "Ready", error: <nil>
	W1025 10:26:18.340789  454182 pod_ready.go:104] pod "coredns-66bc5c9577-mwxxc" is not "Ready", error: <nil>
	I1025 10:26:14.939620  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:14.949980  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:14.950069  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:14.980983  438892 cri.go:89] found id: ""
	I1025 10:26:14.981019  438892 logs.go:282] 0 containers: []
	W1025 10:26:14.981028  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:14.981035  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:14.981096  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:15.010768  438892 cri.go:89] found id: ""
	I1025 10:26:15.011228  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.011282  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:15.011311  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:15.011450  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:15.055501  438892 cri.go:89] found id: ""
	I1025 10:26:15.055589  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.055615  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:15.055645  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:15.055741  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:15.088984  438892 cri.go:89] found id: ""
	I1025 10:26:15.089013  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.089024  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:15.089031  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:15.089093  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:15.117598  438892 cri.go:89] found id: ""
	I1025 10:26:15.117621  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.117629  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:15.117636  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:15.117700  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:15.146587  438892 cri.go:89] found id: ""
	I1025 10:26:15.146617  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.146626  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:15.146634  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:15.146696  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:15.173772  438892 cri.go:89] found id: ""
	I1025 10:26:15.173841  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.173864  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:15.173887  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:15.173976  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:15.200614  438892 cri.go:89] found id: ""
	I1025 10:26:15.200640  438892 logs.go:282] 0 containers: []
	W1025 10:26:15.200660  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:15.200670  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:15.200682  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:15.321323  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:15.321357  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:15.340245  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:15.340276  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:15.420172  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:15.420265  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:15.420287  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:15.459593  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:15.459627  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:17.996375  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:18.008899  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:18.008990  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:18.040562  438892 cri.go:89] found id: ""
	I1025 10:26:18.040595  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.040605  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:18.040612  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:18.040676  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:18.067943  438892 cri.go:89] found id: ""
	I1025 10:26:18.067970  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.067981  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:18.067988  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:18.068046  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:18.094973  438892 cri.go:89] found id: ""
	I1025 10:26:18.094998  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.095007  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:18.095014  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:18.095069  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:18.121106  438892 cri.go:89] found id: ""
	I1025 10:26:18.121132  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.121141  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:18.121148  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:18.121211  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:18.147539  438892 cri.go:89] found id: ""
	I1025 10:26:18.147564  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.147572  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:18.147579  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:18.147635  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:18.174017  438892 cri.go:89] found id: ""
	I1025 10:26:18.174038  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.174047  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:18.174054  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:18.174115  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:18.200055  438892 cri.go:89] found id: ""
	I1025 10:26:18.200082  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.200099  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:18.200106  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:18.200189  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:18.229040  438892 cri.go:89] found id: ""
	I1025 10:26:18.229065  438892 logs.go:282] 0 containers: []
	W1025 10:26:18.229075  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:18.229084  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:18.229112  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:18.353467  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:18.353509  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:18.372404  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:18.372435  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:18.441975  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:18.441997  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:18.442011  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:18.479846  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:18.479881  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
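[editor's note] The repeated blocks above are minikube's diagnostic sweep from logs.go: each pass shells out to journalctl, dmesg, kubectl, and crictl and tags the output by section, logging failures (like the "failed describe nodes" entries) without aborting. Below is a minimal local sketch of that sweep, assuming direct shell access rather than minikube's ssh_runner, and using only the commands visible verbatim in the log; it is an illustration, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs mirrors the sections seen above: kubelet, dmesg,
// describe nodes, CRI-O, and container status. The command strings
// are copied from the ssh_runner lines in the log; running them
// locally instead of over SSH is this sketch's simplifying assumption.
func gatherLogs() {
	sections := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sections {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// Like minikube, report the failure and keep sweeping the
			// remaining sections rather than stopping at the first error.
			fmt.Printf("failed %s: %v\n", s.name, err)
		}
		fmt.Printf("==> %s <==\n%s\n", s.name, out)
	}
}

func main() { gatherLogs() }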
	I1025 10:26:19.838624  454182 pod_ready.go:94] pod "coredns-66bc5c9577-mwxxc" is "Ready"
	I1025 10:26:19.838709  454182 pod_ready.go:86] duration metric: took 6.012373986s for pod "coredns-66bc5c9577-mwxxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.842098  454182 pod_ready.go:83] waiting for pod "etcd-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.852961  454182 pod_ready.go:94] pod "etcd-pause-598105" is "Ready"
	I1025 10:26:19.852990  454182 pod_ready.go:86] duration metric: took 10.854195ms for pod "etcd-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:19.942740  454182 pod_ready.go:83] waiting for pod "kube-apiserver-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:26:21.947633  454182 pod_ready.go:104] pod "kube-apiserver-pause-598105" is not "Ready", error: <nil>
	I1025 10:26:21.011635  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:21.022296  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:21.022364  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:21.048627  438892 cri.go:89] found id: ""
	I1025 10:26:21.048650  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.048659  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:21.048666  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:21.048727  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:21.079027  438892 cri.go:89] found id: ""
	I1025 10:26:21.079053  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.079061  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:21.079068  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:21.079126  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:21.104193  438892 cri.go:89] found id: ""
	I1025 10:26:21.104217  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.104225  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:21.104232  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:21.104295  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:21.130491  438892 cri.go:89] found id: ""
	I1025 10:26:21.130514  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.130522  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:21.130529  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:21.130588  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:21.161651  438892 cri.go:89] found id: ""
	I1025 10:26:21.161674  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.161681  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:21.161687  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:21.161744  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:21.190105  438892 cri.go:89] found id: ""
	I1025 10:26:21.190127  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.190136  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:21.190142  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:21.190203  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:21.216258  438892 cri.go:89] found id: ""
	I1025 10:26:21.216288  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.216297  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:21.216304  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:21.216361  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:21.246540  438892 cri.go:89] found id: ""
	I1025 10:26:21.246564  438892 logs.go:282] 0 containers: []
	W1025 10:26:21.246573  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:21.246582  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:21.246594  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1025 10:26:21.278793  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:21.278822  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:21.400271  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:21.400313  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:21.417089  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:21.417117  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:21.496759  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:21.496781  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:21.496796  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:24.038887  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:24.049530  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:24.049598  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:24.076037  438892 cri.go:89] found id: ""
	I1025 10:26:24.076062  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.076072  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:24.076079  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:24.076142  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:24.102394  438892 cri.go:89] found id: ""
	I1025 10:26:24.102420  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.102437  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:24.102460  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:24.102555  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:24.128449  438892 cri.go:89] found id: ""
	I1025 10:26:24.128473  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.128481  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:24.128494  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:24.128575  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:24.158189  438892 cri.go:89] found id: ""
	I1025 10:26:24.158216  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.158237  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:24.158244  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:24.158340  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:24.184119  438892 cri.go:89] found id: ""
	I1025 10:26:24.184142  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.184157  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:24.184164  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:24.184225  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:24.209733  438892 cri.go:89] found id: ""
	I1025 10:26:24.209808  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.209840  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:24.209861  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:24.209949  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:24.235401  438892 cri.go:89] found id: ""
	I1025 10:26:24.235482  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.235514  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:24.235535  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:24.235625  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:24.261927  438892 cri.go:89] found id: ""
	I1025 10:26:24.262005  438892 logs.go:282] 0 containers: []
	W1025 10:26:24.262027  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:24.262052  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:24.262088  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:24.380449  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:24.380484  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:24.397434  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:24.397462  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:24.475555  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:24.475630  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:24.475652  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:24.516468  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:24.516509  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1025 10:26:23.948161  454182 pod_ready.go:104] pod "kube-apiserver-pause-598105" is not "Ready", error: <nil>
	I1025 10:26:25.950663  454182 pod_ready.go:94] pod "kube-apiserver-pause-598105" is "Ready"
	I1025 10:26:25.950693  454182 pod_ready.go:86] duration metric: took 6.007915697s for pod "kube-apiserver-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.955830  454182 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.961412  454182 pod_ready.go:94] pod "kube-controller-manager-pause-598105" is "Ready"
	I1025 10:26:25.961436  454182 pod_ready.go:86] duration metric: took 5.577549ms for pod "kube-controller-manager-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.963793  454182 pod_ready.go:83] waiting for pod "kube-proxy-gg7cn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.967983  454182 pod_ready.go:94] pod "kube-proxy-gg7cn" is "Ready"
	I1025 10:26:25.968009  454182 pod_ready.go:86] duration metric: took 4.190692ms for pod "kube-proxy-gg7cn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:25.970126  454182 pod_ready.go:83] waiting for pod "kube-scheduler-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:26.147596  454182 pod_ready.go:94] pod "kube-scheduler-pause-598105" is "Ready"
	I1025 10:26:26.147624  454182 pod_ready.go:86] duration metric: took 177.473464ms for pod "kube-scheduler-pause-598105" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:26:26.147638  454182 pod_ready.go:40] duration metric: took 12.325735201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:26:26.198871  454182 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:26:26.201904  454182 out.go:179] * Done! kubectl is now configured to use "pause-598105" cluster and "default" namespace by default
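[editor's note] The pod_ready.go lines above wait, per pod, until the pod either reports the Ready condition or disappears ("Ready" or be gone), under an overall 4m0s cap. A hedged client-go sketch of such a poll follows; the 2-second interval is an assumption read off the log's pacing, the kubeconfig path is illustrative (minikube builds its client from the rest.Config dumped by kapi.go instead), and this is a simplified stand-in, not minikube's pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports Ready or is gone,
// matching the "to be \"Ready\" or be gone" wording in the log.
func waitPodReady(cs *kubernetes.Clientset, ns, name string) error {
	deadline := time.Now().Add(4 * time.Minute) // "extra waiting up to 4m0s"
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return nil // pod is gone, which also satisfies the wait
		}
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-66bc5c9577-mwxxc"); err != nil {
		panic(err)
	}
}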
	I1025 10:26:27.055327  438892 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:26:27.066089  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1025 10:26:27.066162  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1025 10:26:27.092560  438892 cri.go:89] found id: ""
	I1025 10:26:27.092586  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.092595  438892 logs.go:284] No container was found matching "kube-apiserver"
	I1025 10:26:27.092603  438892 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1025 10:26:27.092663  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1025 10:26:27.118702  438892 cri.go:89] found id: ""
	I1025 10:26:27.118727  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.118736  438892 logs.go:284] No container was found matching "etcd"
	I1025 10:26:27.118742  438892 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1025 10:26:27.118801  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1025 10:26:27.145028  438892 cri.go:89] found id: ""
	I1025 10:26:27.145052  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.145061  438892 logs.go:284] No container was found matching "coredns"
	I1025 10:26:27.145067  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1025 10:26:27.145126  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1025 10:26:27.169882  438892 cri.go:89] found id: ""
	I1025 10:26:27.169908  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.169917  438892 logs.go:284] No container was found matching "kube-scheduler"
	I1025 10:26:27.169924  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1025 10:26:27.169979  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1025 10:26:27.206264  438892 cri.go:89] found id: ""
	I1025 10:26:27.206287  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.206296  438892 logs.go:284] No container was found matching "kube-proxy"
	I1025 10:26:27.206303  438892 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1025 10:26:27.206360  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1025 10:26:27.237541  438892 cri.go:89] found id: ""
	I1025 10:26:27.237567  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.237577  438892 logs.go:284] No container was found matching "kube-controller-manager"
	I1025 10:26:27.237584  438892 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1025 10:26:27.237643  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1025 10:26:27.274400  438892 cri.go:89] found id: ""
	I1025 10:26:27.274425  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.274434  438892 logs.go:284] No container was found matching "kindnet"
	I1025 10:26:27.274441  438892 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1025 10:26:27.274496  438892 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1025 10:26:27.316922  438892 cri.go:89] found id: ""
	I1025 10:26:27.316950  438892 logs.go:282] 0 containers: []
	W1025 10:26:27.316959  438892 logs.go:284] No container was found matching "storage-provisioner"
	I1025 10:26:27.316969  438892 logs.go:123] Gathering logs for kubelet ...
	I1025 10:26:27.316981  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1025 10:26:27.455070  438892 logs.go:123] Gathering logs for dmesg ...
	I1025 10:26:27.455115  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1025 10:26:27.473545  438892 logs.go:123] Gathering logs for describe nodes ...
	I1025 10:26:27.473576  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1025 10:26:27.549389  438892 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1025 10:26:27.549410  438892 logs.go:123] Gathering logs for CRI-O ...
	I1025 10:26:27.549422  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1025 10:26:27.586717  438892 logs.go:123] Gathering logs for container status ...
	I1025 10:26:27.586750  438892 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.823569575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.829117454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.840388196Z" level=info msg="Started container" PID=2285 containerID=bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2 description=kube-system/etcd-pause-598105/etcd id=83b4b790-8a7e-44a2-99a6-558b44bd4b78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=25c98b76265a5863ddf43b3f25089c5d8745e2fa7946a963f98679031b53c44c
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.863723461Z" level=info msg="Started container" PID=2269 containerID=12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2 description=kube-system/kube-apiserver-pause-598105/kube-apiserver id=a8fb0d59-360a-418e-9bd8-3990407c34a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72a7523c9bda4427ce16fe00a5961e29a42f9a4660fe7d97cc661657d937ee31
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.878066226Z" level=info msg="Created container 7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f: kube-system/kube-scheduler-pause-598105/kube-scheduler" id=ff264a87-8ffa-44dc-a2bb-51574fc61cf9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.888852843Z" level=info msg="Created container 54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd: kube-system/kube-controller-manager-pause-598105/kube-controller-manager" id=cf6e892d-f6ed-4b4f-b127-6bbef87fcf7b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.889428555Z" level=info msg="Starting container: 54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd" id=e622557d-47c5-41b5-bbb1-d8a2bd5df6bd name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.890219679Z" level=info msg="Starting container: 7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f" id=de2ff924-63d5-4c19-b731-29351505fffc name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.891349671Z" level=info msg="Started container" PID=2294 containerID=54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd description=kube-system/kube-controller-manager-pause-598105/kube-controller-manager id=e622557d-47c5-41b5-bbb1-d8a2bd5df6bd name=/runtime.v1.RuntimeService/StartContainer sandboxID=9ff2fd1e0fa6ec15391afa3f4a51fd19a0423c201fa7dd08396eac47e88b2576
	Oct 25 10:26:07 pause-598105 crio[2053]: time="2025-10-25T10:26:07.900270786Z" level=info msg="Started container" PID=2290 containerID=7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f description=kube-system/kube-scheduler-pause-598105/kube-scheduler id=de2ff924-63d5-4c19-b731-29351505fffc name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a3b9c4ee6539819850a1d206dc5af78096846532ff976e21d6c8d4de6c99ce5
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.286183953Z" level=info msg="Created container e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b: kube-system/kube-proxy-gg7cn/kube-proxy" id=d2f9d6c3-ab10-4576-a29e-6c5c32b6f5db name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.288612242Z" level=info msg="Starting container: e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b" id=e7c62927-ed2f-480d-90e6-4d35bd85561a name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:26:08 pause-598105 crio[2053]: time="2025-10-25T10:26:08.291621339Z" level=info msg="Started container" PID=2321 containerID=e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b description=kube-system/kube-proxy-gg7cn/kube-proxy id=e7c62927-ed2f-480d-90e6-4d35bd85561a name=/runtime.v1.RuntimeService/StartContainer sandboxID=76624a180807e360eb6b737e67d1eaf7fe8a9f01233f7c7b4fd2e2247fa3662c
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.006672813Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.013927306Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.014166768Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.014311516Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022203671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022602404Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.022779473Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.028327105Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.028563818Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.02869968Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.033655576Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:26:18 pause-598105 crio[2053]: time="2025-10-25T10:26:18.033881974Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e19bbf1f4eed5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   24 seconds ago       Running             kube-proxy                1                   76624a180807e       kube-proxy-gg7cn                       kube-system
	54304becc9c9b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   9ff2fd1e0fa6e       kube-controller-manager-pause-598105   kube-system
	7a7371c136a2c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   24 seconds ago       Running             kube-scheduler            1                   7a3b9c4ee6539       kube-scheduler-pause-598105            kube-system
	bf9ced8f087cd       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   24 seconds ago       Running             etcd                      1                   25c98b76265a5       etcd-pause-598105                      kube-system
	12e15ac9eaae3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   72a7523c9bda4       kube-apiserver-pause-598105            kube-system
	140627d5948e8       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   078c674624b0b       coredns-66bc5c9577-mwxxc               kube-system
	567f9d3b15fc7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   ebdae3f24c8c2       kindnet-x2zhm                          kube-system
	c76e4fa5a72c2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   078c674624b0b       coredns-66bc5c9577-mwxxc               kube-system
	a687e0761f69e       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   76624a180807e       kube-proxy-gg7cn                       kube-system
	f8ac7113586ad       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   ebdae3f24c8c2       kindnet-x2zhm                          kube-system
	7cfc933f8e740       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   7a3b9c4ee6539       kube-scheduler-pause-598105            kube-system
	c030a64bfba81       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   72a7523c9bda4       kube-apiserver-pause-598105            kube-system
	f988c3e30953f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   9ff2fd1e0fa6e       kube-controller-manager-pause-598105   kube-system
	40f252c2f9351       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   25c98b76265a5       etcd-pause-598105                      kube-system
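[editor's note] The table above is the key evidence that the control plane restarted exactly once: every component appears twice, with ATTEMPT 0 Exited and ATTEMPT 1 Running in the same pod sandbox. A small sketch of how one might check that programmatically from `crictl ps -a -o json` follows; the JSON field names are an assumption modeled on the CRI ListContainers response, not a documented contract.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criPS is a minimal shape for `crictl ps -a -o json` output; only the
// fields needed for the restart check are declared.
type criPS struct {
	Containers []struct {
		Metadata struct {
			Name    string `json:"name"`
			Attempt int    `json:"attempt"`
		} `json:"metadata"`
		State string `json:"state"` // e.g. CONTAINER_RUNNING / CONTAINER_EXITED
	} `json:"containers"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var ps criPS
	if err := json.Unmarshal(out, &ps); err != nil {
		panic(err)
	}
	// Print the three columns that matter in the table above:
	// container name, attempt counter, and current state.
	for _, c := range ps.Containers {
		fmt.Printf("%-28s attempt=%d %s\n", c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}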
	
	
	==> coredns [140627d5948e8c0ffd2c884620b8491eace7b98a1096b00a577032f0de164d71] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60554 - 27643 "HINFO IN 2208468797763737038.1216825629007552775. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03390533s
	
	
	==> coredns [c76e4fa5a72c206c0ecc32c7257efd336d9e589076154db2e02e1a902dbd47e2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44143 - 1580 "HINFO IN 5747158795120622237.4124721675757847013. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003715379s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-598105
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-598105
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=pause-598105
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_25_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:25:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-598105
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:26:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:25:55 +0000   Sat, 25 Oct 2025 10:25:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-598105
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                01970c02-0799-474c-af8a-64373f40a4f6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-mwxxc                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-598105                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-x2zhm                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-598105             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-598105    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-gg7cn                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-598105             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 76s   kube-proxy       
	  Normal   Starting                 19s   kube-proxy       
	  Normal   Starting                 84s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s   kubelet          Node pause-598105 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s   kubelet          Node pause-598105 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s   kubelet          Node pause-598105 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s   node-controller  Node pause-598105 event: Registered Node pause-598105 in Controller
	  Normal   NodeReady                37s   kubelet          Node pause-598105 status is now: NodeReady
	  Normal   RegisteredNode           16s   node-controller  Node pause-598105 event: Registered Node pause-598105 in Controller
	
	
	==> dmesg <==
	[Oct25 10:00] overlayfs: idmapped layers are currently not supported
	[Oct25 10:01] overlayfs: idmapped layers are currently not supported
	[Oct25 10:02] overlayfs: idmapped layers are currently not supported
	[  +3.771525] overlayfs: idmapped layers are currently not supported
	[ +47.892456] overlayfs: idmapped layers are currently not supported
	[Oct25 10:03] overlayfs: idmapped layers are currently not supported
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [40f252c2f93511d0cf5758ba10d102806b290bc96aa42501a78a329b693e52c4] <==
	{"level":"warn","ts":"2025-10-25T10:25:04.483031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.540082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.580262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.651488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.761969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.769509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:25:04.880467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55270","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:26:00.135303Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-25T10:26:00.135365Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-598105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-25T10:26:00.135495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:26:00.476820Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-25T10:26:00.478370Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478443Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478535Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478560Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-25T10:26:00.478565Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-10-25T10:26:00.478569Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-25T10:26:00.478546Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-25T10:26:00.478589Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:26:00.478626Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-25T10:26:00.478640Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-25T10:26:00.482109Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-25T10:26:00.482202Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-25T10:26:00.482244Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:26:00.482257Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-598105","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [bf9ced8f087cd758796eeb7bbb83271e0303a80f6596c05bec23f83162420ca2] <==
	{"level":"warn","ts":"2025-10-25T10:26:11.168594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.187353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.212245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.228045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.242776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.265924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.279051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.303008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.313752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.339659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.352489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.373802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.384121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.401621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.426220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.438120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.484979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.496356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.510638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.526231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.549819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.574112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.593898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.615209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:26:11.678346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34846","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:26:32 up  2:09,  0 user,  load average: 1.88, 2.87, 2.60
	Linux pause-598105 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [567f9d3b15fc7f8cdc27eaa6ffd20100bea3644c1a3142d939c47487ce7d1d1d] <==
	I1025 10:26:07.794458       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:26:07.797912       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:26:07.798062       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:26:07.798075       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:26:07.798086       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:26:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:26:08.013591       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:26:08.014217       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:26:08.014268       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:26:08.023122       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:26:12.923997       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:26:12.924080       1 metrics.go:72] Registering metrics
	I1025 10:26:12.924219       1 controller.go:711] "Syncing nftables rules"
	I1025 10:26:18.001983       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:26:18.002163       1 main.go:301] handling current node
	I1025 10:26:28.003254       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:26:28.003343       1 main.go:301] handling current node
	
	
	==> kindnet [f8ac7113586ad0c3590df29847e589f83cd8305959fe654e545a7ddbbe0b00eb] <==
	I1025 10:25:15.080369       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:25:15.080623       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:25:15.080755       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:25:15.080776       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:25:15.080790       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:25:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:25:15.282830       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:25:15.283020       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:25:15.283042       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:25:15.375399       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:25:45.283761       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:25:45.376809       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:25:45.376821       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:25:45.376912       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1025 10:25:46.983553       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:25:46.983710       1 metrics.go:72] Registering metrics
	I1025 10:25:46.983854       1 controller.go:711] "Syncing nftables rules"
	I1025 10:25:55.288553       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:25:55.288613       1 main.go:301] handling current node
	
	
	==> kube-apiserver [12e15ac9eaae3cc1b3ada04843bd01215519f12759752badde06818b40df81d2] <==
	I1025 10:26:12.801575       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:26:12.801963       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:26:12.802016       1 policy_source.go:240] refreshing policies
	I1025 10:26:12.802149       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:26:12.807562       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:26:12.817237       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:26:12.842935       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:26:12.817747       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:26:12.824696       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:26:12.824745       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:26:12.847296       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:26:12.837048       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:26:12.847516       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:26:12.847524       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:26:12.847536       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:26:12.846114       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:26:12.881495       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:26:12.881616       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:26:12.881659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:26:13.510220       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:26:14.736794       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:26:16.137252       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:26:16.338142       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:26:16.388090       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:26:16.537742       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [c030a64bfba8184fa15e4b55946874e6f550fdb94f2645fe0fe5f83cc5c1445b] <==
	W1025 10:26:00.191306       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191399       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191497       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191591       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191680       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191773       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191886       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.191983       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.192084       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210699       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210790       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210846       1 logging.go:55] [core] [Channel #17 SubChannel #21]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210896       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.210949       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211007       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211065       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211123       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211264       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.211340       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.215700       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.215893       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216008       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216258       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.216411       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1025 10:26:00.218198       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [54304becc9c9b39308914a5ecf29a26df5b8f4c7e53eb4b23245f4903e1707bd] <==
	I1025 10:26:16.132028       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:26:16.132062       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:26:16.132112       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:26:16.139378       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:26:16.141575       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:26:16.144880       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:26:16.149987       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:26:16.152256       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:26:16.155067       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:26:16.155072       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:26:16.157289       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:26:16.159541       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:26:16.163806       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:26:16.163891       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:26:16.163907       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:26:16.167939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:26:16.172304       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:26:16.175622       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:26:16.180065       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:26:16.180175       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:26:16.180261       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:26:16.180346       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-598105"
	I1025 10:26:16.180393       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:26:16.180550       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:26:16.180855       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	
	
	==> kube-controller-manager [f988c3e30953fb9f8538ab5b9751389b5fea271fe794156bb9301f708ecaac3a] <==
	I1025 10:25:12.722486       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:25:12.722541       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:25:12.722571       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:25:12.722599       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:25:12.723268       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:25:12.726611       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:25:12.734969       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:25:12.740455       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:25:12.740713       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-598105" podCIDRs=["10.244.0.0/24"]
	I1025 10:25:12.746371       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:25:12.759703       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:25:12.760752       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:25:12.760781       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:25:12.760806       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:25:12.760868       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:25:12.761456       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:25:12.763191       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:25:12.764264       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:25:12.765514       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:25:12.765564       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:25:12.766841       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:25:12.766856       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:25:12.766863       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:25:12.772235       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:25:57.718506       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b] <==
	I1025 10:25:15.577350       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:25:15.657043       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:25:15.757252       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:25:15.757373       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:25:15.757535       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:25:15.776880       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:25:15.776935       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:25:15.780922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:25:15.781262       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:25:15.781342       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:25:15.784595       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:25:15.784689       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:25:15.785107       1 config.go:200] "Starting service config controller"
	I1025 10:25:15.785159       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:25:15.785487       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:25:15.785550       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:25:15.785997       1 config.go:309] "Starting node config controller"
	I1025 10:25:15.786060       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:25:15.786090       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:25:15.885309       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:25:15.885316       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:25:15.885636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e19bbf1f4eed5c911d637106fa9f035c92b7e0210f15222f9a9123d09622c90b] <==
	I1025 10:26:10.473355       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:26:11.308061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:26:12.923121       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:26:12.923264       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:26:12.923400       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:26:12.993612       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:26:12.993723       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:26:13.010193       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:26:13.010469       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:26:13.010493       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:26:13.013160       1 config.go:200] "Starting service config controller"
	I1025 10:26:13.013183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:26:13.013199       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:26:13.013204       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:26:13.013214       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:26:13.013232       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:26:13.013514       1 config.go:309] "Starting node config controller"
	I1025 10:26:13.013524       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:26:13.113879       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:26:13.113911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:26:13.113927       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:26:13.113937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7a7371c136a2c320fe9995dc5d7d2acffc0c01af23cb2964be84fe31334bb21f] <==
	I1025 10:26:10.156263       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:26:12.659526       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:26:12.659634       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:26:12.659668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:26:12.659696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:26:12.787755       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:26:12.787793       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:26:12.793347       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:12.793393       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:12.794214       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:26:12.794294       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:26:12.895994       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082] <==
	E1025 10:25:06.490804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:25:06.490854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:25:06.495240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:25:06.495390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:25:06.496315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:25:06.498870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:25:06.499028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:25:06.499134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:25:06.499287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:25:06.499317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:25:06.499421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:25:06.499495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:25:06.499569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:25:06.499640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:25:06.499726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:25:06.499869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:25:06.499441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:25:06.500331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1025 10:25:07.886587       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:00.138565       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1025 10:26:00.138621       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1025 10:26:00.138648       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1025 10:26:00.138684       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:26:00.139319       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1025 10:26:00.139346       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645468    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645607    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.645742    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: I1025 10:26:07.649858    1297 scope.go:117] "RemoveContainer" containerID="7cfc933f8e7401faf3db3a012b28cf8021fdfdc23609781c8432a46243927082"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650342    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650522    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650672    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mwxxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aef0e38-29c2-4dbc-b75d-96b9454113b4" pod="kube-system/coredns-66bc5c9577-mwxxc"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650831    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e39b84e1439a35b8cc4ca27447f425f" pod="kube-system/etcd-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.650982    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48af73e8293ba75a878f8d53435bf781" pod="kube-system/kube-controller-manager-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.651130    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: I1025 10:26:07.659069    1297 scope.go:117] "RemoveContainer" containerID="a687e0761f69eb04b9cc7dc6eac7e0f7b9d90ee67f58072fdc0880f4eb52fd5b"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659568    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659755    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.659921    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gg7cn\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="8afd0eec-98cc-4d94-ac83-e6734161aea0" pod="kube-system/kube-proxy-gg7cn"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660078    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-x2zhm\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="183c57f8-d19b-4e10-b018-d0518418dc4e" pod="kube-system/kindnet-x2zhm"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660244    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-mwxxc\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5aef0e38-29c2-4dbc-b75d-96b9454113b4" pod="kube-system/coredns-66bc5c9577-mwxxc"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660403    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="3e39b84e1439a35b8cc4ca27447f425f" pod="kube-system/etcd-pause-598105"
	Oct 25 10:26:07 pause-598105 kubelet[1297]: E1025 10:26:07.660558    1297 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-598105\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="48af73e8293ba75a878f8d53435bf781" pod="kube-system/kube-controller-manager-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.675771    1297 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:pause-598105\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.677576    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-598105\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="d0b870271552b247f514ca996eff9377" pod="kube-system/kube-apiserver-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.725893    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-598105\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="3b472855df79fc609c4d241ac0d24faf" pod="kube-system/kube-scheduler-pause-598105"
	Oct 25 10:26:12 pause-598105 kubelet[1297]: E1025 10:26:12.762172    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-gg7cn\" is forbidden: User \"system:node:pause-598105\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-598105' and this object" podUID="8afd0eec-98cc-4d94-ac83-e6734161aea0" pod="kube-system/kube-proxy-gg7cn"
	Oct 25 10:26:26 pause-598105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:26:26 pause-598105 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:26:26 pause-598105 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
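
The etcd entries in the dump above are zap-style JSON, one object per line, while the kube component entries are plain klog text. A small filter that keeps only the warn/error JSON entries makes captures like this quicker to triage; the sketch below assumes nothing beyond the field names visible in the capture (level, ts, caller, msg) and is not part of the minikube tooling.

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// zapLine models only the fields present in the captured log lines.
type zapLine struct {
	Level  string `json:"level"`
	Ts     string `json:"ts"`
	Caller string `json:"caller"`
	Msg    string `json:"msg"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some entries carry long stacktraces
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if !strings.HasPrefix(line, "{") {
			continue // skip "==> etcd [...] <==" headers and blank separators
		}
		var l zapLine
		if err := json.Unmarshal([]byte(line), &l); err != nil {
			continue // non-JSON (klog-style) lines from the kube components
		}
		if l.Level == "warn" || l.Level == "error" {
			fmt.Printf("%s %-5s %s (%s)\n", l.Ts, l.Level, l.Msg, l.Caller)
		}
	}
}
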
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598105 -n pause-598105
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-598105 -n pause-598105: exit status 2 (453.305636ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
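
The --format={{.APIServer}} argument is a Go text/template evaluated against minikube's status object, which is why the command prints the bare word Running even though it exits non-zero (the exit code reflects minikube's status semantics, not the template). A minimal sketch of the mechanism follows; the Status struct here is a stand-in for illustration, not minikube's actual type.

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status payload; only the field
// exercised by --format={{.APIServer}} matters for this sketch.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
	// Prints "Running", matching the stdout captured above.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}
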
helpers_test.go:269: (dbg) Run:  kubectl --context pause-598105 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.31s)
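
The kubectl --field-selector=status.phase!=Running check in the post-mortem above can also be reproduced with client-go; below is a minimal sketch under the same field selector, assuming the default kubeconfig location rather than the explicit pause-598105 context the test uses.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config; the test selects its context explicitly.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter as the post-mortem step: every pod not in phase Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}
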

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.538674ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:30:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
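
The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state probe, which shells out to runc list -f json on the node before enabling the addon; it fails here because the /run/runc state directory named in the error does not exist on this crio node. Below is a sketch of that probe's shape, not minikube's actual code; the JSON field names follow runc's list output, and treating any container in state "paused" as blocking is the illustrative rule.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container mirrors the fields of interest in `runc list -f json`
// output; "id" and "status" are all a paused-check needs.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// On this node the command itself fails, as in the capture:
		// "open /run/runc: no such file or directory".
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused:", ids)
}
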
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-610853 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-610853 describe deploy/metrics-server -n kube-system: exit status 1 (82.652871ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-610853 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
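
The expectation string shows what the --images/--registries pair does: the custom registry from --registries=MetricsServer=fake.domain is prefixed onto the image from --images=MetricsServer=registry.k8s.io/echoserver:1.4. A one-line illustration of that rewriting (a simplification; minikube's real addon image handling also covers defaults and tags):

package main

import "fmt"

func main() {
	image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
	registry := "fake.domain"                 // from --registries=MetricsServer=...
	// The deployment is expected to reference the override-prefixed image.
	fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
}
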
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-610853
helpers_test.go:243: (dbg) docker inspect old-k8s-version-610853:

-- stdout --
	[
	    {
	        "Id": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	        "Created": "2025-10-25T10:28:59.57788081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471573,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:28:59.653455974Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hosts",
	        "LogPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2-json.log",
	        "Name": "/old-k8s-version-610853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-610853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-610853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	                "LowerDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-610853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-610853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-610853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "162b4b61b2e7d5805ba95f95b37b856f18651d55a57f7aa922ad4b0cf11c25bb",
	            "SandboxKey": "/var/run/docker/netns/162b4b61b2e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-610853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:bd:9f:ca:b8:da",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "81225534a6ecbdb108a21a8d61134e13e2b296f3c48ec26db1c8d60aa1908e7c",
	                    "EndpointID": "32af8fb194bd5b47d0d82ba5b8cb0dfaaa544a8c046b36983125f5b94451a0e5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-610853",
	                        "d9ac8e10f5b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
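
The Ports map in the inspect output above holds the dynamically assigned host ports minikube uses to reach the node; the same lookup can be reproduced with a Go template (the identical format string appears in the provisioning log further down):

	# prints 33422, the host port bound to the container's SSH port 22
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-610853
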
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25: (1.192260397s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-821614 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo containerd config dump                                                                                                                                                                                                  │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo crio config                                                                                                                                                                                                             │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-821614                                                                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:27 UTC │
	│ start   │ -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p kubernetes-upgrade-845331                                                                                                                                                                                                                  │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-313068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-068963                                                                                                                                                                                                                   │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-506318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:28:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:28:52.551358  471179 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:28:52.551551  471179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:28:52.551577  471179 out.go:374] Setting ErrFile to fd 2...
	I1025 10:28:52.551597  471179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:28:52.551887  471179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:28:52.552359  471179 out.go:368] Setting JSON to false
	I1025 10:28:52.553327  471179 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7882,"bootTime":1761380250,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:28:52.553425  471179 start.go:141] virtualization:  
	I1025 10:28:52.557271  471179 out.go:179] * [old-k8s-version-610853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:28:52.562293  471179 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:28:52.562329  471179 notify.go:220] Checking for updates...
	I1025 10:28:52.565886  471179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:28:52.569435  471179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:28:52.572894  471179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:28:52.576249  471179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:28:52.579576  471179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:28:52.583447  471179 config.go:182] Loaded profile config "cert-expiration-313068": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:28:52.583616  471179 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:28:52.618186  471179 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:28:52.618414  471179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:28:52.679458  471179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:28:52.669931377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:28:52.679565  471179 docker.go:318] overlay module found
	I1025 10:28:52.682855  471179 out.go:179] * Using the docker driver based on user configuration
	I1025 10:28:52.685885  471179 start.go:305] selected driver: docker
	I1025 10:28:52.685904  471179 start.go:925] validating driver "docker" against <nil>
	I1025 10:28:52.685918  471179 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:28:52.686634  471179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:28:52.744112  471179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:28:52.735494147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:28:52.744288  471179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:28:52.744519  471179 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:28:52.747700  471179 out.go:179] * Using Docker driver with root privileges
	I1025 10:28:52.750690  471179 cni.go:84] Creating CNI manager for ""
	I1025 10:28:52.750759  471179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:28:52.750772  471179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:28:52.750847  471179 start.go:349] cluster config:
	{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:28:52.754674  471179 out.go:179] * Starting "old-k8s-version-610853" primary control-plane node in "old-k8s-version-610853" cluster
	I1025 10:28:52.757681  471179 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:28:52.760679  471179 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:28:52.763630  471179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:28:52.763690  471179 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:28:52.763704  471179 cache.go:58] Caching tarball of preloaded images
	I1025 10:28:52.763730  471179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:28:52.763810  471179 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:28:52.763828  471179 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:28:52.763942  471179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:28:52.763967  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json: {Name:mk7c713764695267770943718767636828f30be6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:28:52.782690  471179 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:28:52.782711  471179 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:28:52.782734  471179 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:28:52.782757  471179 start.go:360] acquireMachinesLock for old-k8s-version-610853: {Name:mk4cf5d4a6d8178880fb3a10acdef15766144ca0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:28:52.782863  471179 start.go:364] duration metric: took 86.015µs to acquireMachinesLock for "old-k8s-version-610853"
	I1025 10:28:52.782894  471179 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:28:52.782963  471179 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:28:52.786564  471179 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:28:52.786787  471179 start.go:159] libmachine.API.Create for "old-k8s-version-610853" (driver="docker")
	I1025 10:28:52.786839  471179 client.go:168] LocalClient.Create starting
	I1025 10:28:52.786917  471179 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:28:52.786953  471179 main.go:141] libmachine: Decoding PEM data...
	I1025 10:28:52.786976  471179 main.go:141] libmachine: Parsing certificate...
	I1025 10:28:52.787030  471179 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:28:52.787051  471179 main.go:141] libmachine: Decoding PEM data...
	I1025 10:28:52.787066  471179 main.go:141] libmachine: Parsing certificate...
	I1025 10:28:52.787462  471179 cli_runner.go:164] Run: docker network inspect old-k8s-version-610853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:28:52.803354  471179 cli_runner.go:211] docker network inspect old-k8s-version-610853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:28:52.803429  471179 network_create.go:284] running [docker network inspect old-k8s-version-610853] to gather additional debugging logs...
	I1025 10:28:52.803450  471179 cli_runner.go:164] Run: docker network inspect old-k8s-version-610853
	W1025 10:28:52.819774  471179 cli_runner.go:211] docker network inspect old-k8s-version-610853 returned with exit code 1
	I1025 10:28:52.819815  471179 network_create.go:287] error running [docker network inspect old-k8s-version-610853]: docker network inspect old-k8s-version-610853: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-610853 not found
	I1025 10:28:52.819837  471179 network_create.go:289] output of [docker network inspect old-k8s-version-610853]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-610853 not found
	
	** /stderr **
	I1025 10:28:52.819943  471179 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:28:52.836616  471179 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:28:52.836899  471179 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:28:52.837249  471179 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:28:52.837525  471179 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7e12972baa5a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:37:7e:2b:2e:b3} reservation:<nil>}
	I1025 10:28:52.837980  471179 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a11210}
	I1025 10:28:52.838004  471179 network_create.go:124] attempt to create docker network old-k8s-version-610853 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:28:52.838062  471179 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-610853 old-k8s-version-610853
	I1025 10:28:52.904340  471179 network_create.go:108] docker network old-k8s-version-610853 192.168.85.0/24 created
	I1025 10:28:52.904374  471179 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-610853" container
	I1025 10:28:52.904445  471179 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:28:52.922122  471179 cli_runner.go:164] Run: docker volume create old-k8s-version-610853 --label name.minikube.sigs.k8s.io=old-k8s-version-610853 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:28:52.938958  471179 oci.go:103] Successfully created a docker volume old-k8s-version-610853
	I1025 10:28:52.939049  471179 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-610853-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-610853 --entrypoint /usr/bin/test -v old-k8s-version-610853:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:28:53.507787  471179 oci.go:107] Successfully prepared a docker volume old-k8s-version-610853
	I1025 10:28:53.507839  471179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:28:53.507859  471179 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:28:53.507929  471179 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-610853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:28:59.499270  471179 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-610853:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.991300396s)
	I1025 10:28:59.499303  471179 kic.go:203] duration metric: took 5.991440409s to extract preloaded images to volume ...
	W1025 10:28:59.499468  471179 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:28:59.499586  471179 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:28:59.563068  471179 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-610853 --name old-k8s-version-610853 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-610853 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-610853 --network old-k8s-version-610853 --ip 192.168.85.2 --volume old-k8s-version-610853:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:28:59.872940  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Running}}
	I1025 10:28:59.894714  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:28:59.920523  471179 cli_runner.go:164] Run: docker exec old-k8s-version-610853 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:28:59.976961  471179 oci.go:144] the created container "old-k8s-version-610853" has a running status.
	I1025 10:28:59.977004  471179 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa...
	I1025 10:29:00.513381  471179 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:29:00.559329  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:29:00.584840  471179 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:29:00.584859  471179 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-610853 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:29:00.639132  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:29:00.655652  471179 machine.go:93] provisionDockerMachine start ...
	I1025 10:29:00.655745  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:00.676595  471179 main.go:141] libmachine: Using SSH client type: native
	I1025 10:29:00.676926  471179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I1025 10:29:00.676943  471179 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:29:00.677545  471179 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35382->127.0.0.1:33422: read: connection reset by peer
	I1025 10:29:03.826666  471179 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:29:03.826688  471179 ubuntu.go:182] provisioning hostname "old-k8s-version-610853"
	I1025 10:29:03.826756  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:03.844539  471179 main.go:141] libmachine: Using SSH client type: native
	I1025 10:29:03.844851  471179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I1025 10:29:03.844866  471179 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-610853 && echo "old-k8s-version-610853" | sudo tee /etc/hostname
	I1025 10:29:04.011347  471179 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:29:04.011499  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:04.030860  471179 main.go:141] libmachine: Using SSH client type: native
	I1025 10:29:04.031216  471179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I1025 10:29:04.031237  471179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-610853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-610853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-610853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:29:04.187463  471179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:29:04.187492  471179 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:29:04.187523  471179 ubuntu.go:190] setting up certificates
	I1025 10:29:04.187533  471179 provision.go:84] configureAuth start
	I1025 10:29:04.187611  471179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:29:04.205798  471179 provision.go:143] copyHostCerts
	I1025 10:29:04.205873  471179 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:29:04.205883  471179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:29:04.205967  471179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:29:04.206069  471179 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:29:04.206078  471179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:29:04.206106  471179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:29:04.206156  471179 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:29:04.206166  471179 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:29:04.206191  471179 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:29:04.206244  471179 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-610853 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-610853]
	I1025 10:29:05.081819  471179 provision.go:177] copyRemoteCerts
	I1025 10:29:05.081900  471179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:29:05.081944  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.100794  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:05.206945  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:29:05.227824  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:29:05.246849  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:29:05.264666  471179 provision.go:87] duration metric: took 1.077105355s to configureAuth
	I1025 10:29:05.264695  471179 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:29:05.264933  471179 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:29:05.265083  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.282384  471179 main.go:141] libmachine: Using SSH client type: native
	I1025 10:29:05.282689  471179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33422 <nil> <nil>}
	I1025 10:29:05.282712  471179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:29:05.551982  471179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:29:05.552015  471179 machine.go:96] duration metric: took 4.896335077s to provisionDockerMachine
	I1025 10:29:05.552024  471179 client.go:171] duration metric: took 12.765174819s to LocalClient.Create
	I1025 10:29:05.552037  471179 start.go:167] duration metric: took 12.765251865s to libmachine.API.Create "old-k8s-version-610853"
	I1025 10:29:05.552044  471179 start.go:293] postStartSetup for "old-k8s-version-610853" (driver="docker")
	I1025 10:29:05.552055  471179 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:29:05.552129  471179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:29:05.552172  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.571419  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:05.679195  471179 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:29:05.682348  471179 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:29:05.682375  471179 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:29:05.682387  471179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:29:05.682443  471179 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:29:05.682536  471179 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:29:05.682641  471179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:29:05.689798  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:29:05.707407  471179 start.go:296] duration metric: took 155.347918ms for postStartSetup
	I1025 10:29:05.707766  471179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:29:05.729760  471179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:29:05.730030  471179 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:29:05.730083  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.747371  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:05.848212  471179 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
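
The two df probes above read the /var filesystem by field position: on GNU df, column 5 of `df -h` is Use% and column 4 of `df -BG` is Avail in blocks of 1 GiB, and NR==2 selects the data line under the header. Equivalent commands with illustrative outputs (the log does not echo the values):

	df -h /var  | awk 'NR==2{print $5}'   # e.g. "31%"  -> Use% column
	df -BG /var | awk 'NR==2{print $4}'   # e.g. "180G" -> Avail column, GiB blocks
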
	I1025 10:29:05.852665  471179 start.go:128] duration metric: took 13.069684559s to createHost
	I1025 10:29:05.852686  471179 start.go:83] releasing machines lock for "old-k8s-version-610853", held for 13.069809131s
	I1025 10:29:05.852756  471179 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:29:05.869285  471179 ssh_runner.go:195] Run: cat /version.json
	I1025 10:29:05.869341  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.869604  471179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:29:05.869657  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:05.889353  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:05.895475  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:06.086223  471179 ssh_runner.go:195] Run: systemctl --version
	I1025 10:29:06.092726  471179 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:29:06.130206  471179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:29:06.134785  471179 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:29:06.134880  471179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:29:06.162986  471179 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
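
The find at 10:29:06.134 appears with its shell escaping stripped by the logger; restored to a typeable, quoting-safe form, it renames every top-level bridge/podman CNI config not already disabled, printing each path as it goes (10:29:06.162 then reports the two configs it neutralized):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
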
	I1025 10:29:06.163011  471179 start.go:495] detecting cgroup driver to use...
	I1025 10:29:06.163072  471179 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:29:06.163165  471179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:29:06.180787  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:29:06.194701  471179 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:29:06.194807  471179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:29:06.213342  471179 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:29:06.237005  471179 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:29:06.352915  471179 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:29:06.483244  471179 docker.go:234] disabling docker service ...
	I1025 10:29:06.483360  471179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:29:06.514066  471179 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:29:06.530159  471179 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:29:06.652549  471179 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:29:06.775030  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:29:06.788572  471179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:29:06.802645  471179 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:29:06.802732  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.811650  471179 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:29:06.811759  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.820712  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.829766  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.838962  471179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:29:06.847031  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.855924  471179 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:29:06.869529  471179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
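
Reading the sed edits from 10:29:06.802 through 10:29:06.869 together, the touched keys in /etc/crio/crio.conf.d/02-crio.conf end up as sketched below; the key/value lines come straight from the commands, while the TOML table headers are assumed from cri-o's stock config layout:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
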
	I1025 10:29:06.879531  471179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:29:06.886999  471179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:29:06.894226  471179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:29:07.006950  471179 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:29:07.141131  471179 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:29:07.141203  471179 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:29:07.144948  471179 start.go:563] Will wait 60s for crictl version
	I1025 10:29:07.145018  471179 ssh_runner.go:195] Run: which crictl
	I1025 10:29:07.148567  471179 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:29:07.174922  471179 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
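
The tee at 10:29:06.788 is what lets the bare crictl calls in this log resolve the runtime: /etc/crictl.yaml pins the endpoint so no --runtime-endpoint flag is needed. The resulting file (content taken verbatim from that command) and a typical invocation:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	sudo crictl ps -a    # talks to cri-o via the configured socket
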
	I1025 10:29:07.175007  471179 ssh_runner.go:195] Run: crio --version
	I1025 10:29:07.203258  471179 ssh_runner.go:195] Run: crio --version
	I1025 10:29:07.235005  471179 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:29:07.237875  471179 cli_runner.go:164] Run: docker network inspect old-k8s-version-610853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:29:07.255539  471179 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:29:07.259323  471179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
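
The grep at 10:29:07.255 is a presence probe whose exit status gates the rewrite on the next line; the rewrite itself is idempotent, dropping any stale mapping before appending the fresh one and replacing /etc/hosts in a single sudo cp. The same two-step pattern folded into one shell gate (a sketch, not the literal minikube code path):

	grep -q $'192.168.85.1\thost.minikube.internal$' /etc/hosts || {
	  { grep -v $'\thost.minikube.internal$' /etc/hosts
	    echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts
	}
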
	I1025 10:29:07.270325  471179 kubeadm.go:883] updating cluster {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:29:07.270440  471179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:29:07.270503  471179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:29:07.307907  471179 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:29:07.307931  471179 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:29:07.307988  471179 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:29:07.334765  471179 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:29:07.334812  471179 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:29:07.334820  471179 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:29:07.334913  471179 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-610853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:29:07.335010  471179 ssh_runner.go:195] Run: crio config
	I1025 10:29:07.409525  471179 cni.go:84] Creating CNI manager for ""
	I1025 10:29:07.409548  471179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:29:07.409571  471179 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:29:07.409617  471179 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-610853 NodeName:old-k8s-version-610853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:29:07.409766  471179 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-610853"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:29:07.409844  471179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:29:07.417839  471179 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:29:07.417908  471179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:29:07.425110  471179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:29:07.438109  471179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:29:07.451270  471179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 10:29:07.463711  471179 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:29:07.467171  471179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:29:07.478456  471179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:29:07.608036  471179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:29:07.626352  471179 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853 for IP: 192.168.85.2
	I1025 10:29:07.626385  471179 certs.go:195] generating shared ca certs ...
	I1025 10:29:07.626403  471179 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:07.626561  471179 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:29:07.626641  471179 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:29:07.626654  471179 certs.go:257] generating profile certs ...
	I1025 10:29:07.626741  471179 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.key
	I1025 10:29:07.626759  471179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt with IP's: []
	I1025 10:29:07.768037  471179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt ...
	I1025 10:29:07.768073  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: {Name:mk0fe667161f21822d487a3b304f17d1aab4504b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:07.768308  471179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.key ...
	I1025 10:29:07.768325  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.key: {Name:mkb3471e70dd18185d1efb86dd57f846cc680e29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:07.768431  471179 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be
	I1025 10:29:07.768450  471179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt.132f89be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:29:08.550853  471179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt.132f89be ...
	I1025 10:29:08.550883  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt.132f89be: {Name:mk16425f1901c6413c2c2cb1d87907ba95880eb4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:08.551057  471179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be ...
	I1025 10:29:08.551074  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be: {Name:mk2e873e0d0a1c31655c04df27a58cde6f0d9e2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:08.551181  471179 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt.132f89be -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt
	I1025 10:29:08.551268  471179 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key
	I1025 10:29:08.551338  471179 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key
	I1025 10:29:08.551356  471179 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt with IP's: []
	I1025 10:29:09.363537  471179 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt ...
	I1025 10:29:09.363611  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt: {Name:mk02f917f60e4c3e3e7b29255a5340c38e8b7fbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:09.363817  471179 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key ...
	I1025 10:29:09.363858  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key: {Name:mk9988bf3c4205bdcffeffbc90bb3915c777c664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:09.364095  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:29:09.364181  471179 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:29:09.364219  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:29:09.364267  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:29:09.364321  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:29:09.364370  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:29:09.364448  471179 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:29:09.365043  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:29:09.384200  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:29:09.404185  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:29:09.424973  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:29:09.444767  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:29:09.462673  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:29:09.480157  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:29:09.506249  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:29:09.525199  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:29:09.543314  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:29:09.562464  471179 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:29:09.581890  471179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:29:09.594910  471179 ssh_runner.go:195] Run: openssl version
	I1025 10:29:09.601196  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:29:09.609722  471179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:29:09.613529  471179 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:29:09.613594  471179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:29:09.655268  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:29:09.664208  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:29:09.674234  471179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:29:09.677967  471179 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:29:09.678027  471179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:29:09.721949  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:29:09.730907  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:29:09.739387  471179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:29:09.743563  471179 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:29:09.743641  471179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:29:09.792352  471179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
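
The openssl/ln pairs above implement OpenSSL's hashed-directory CA lookup: `openssl x509 -hash -noout` prints the subject-name hash that the library expects as a <hash>.0 filename in the certs directory, which is why minikubeCA.pem lands behind b5213941.0 at 10:29:09.792. The pattern for one cert:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h = b5213941 here
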
	I1025 10:29:09.802015  471179 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:29:09.805627  471179 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:29:09.805707  471179 kubeadm.go:400] StartCluster: {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:29:09.805858  471179 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:29:09.805928  471179 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:29:09.839586  471179 cri.go:89] found id: ""
	I1025 10:29:09.839669  471179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:29:09.848159  471179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:29:09.856220  471179 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:29:09.856299  471179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:29:09.864641  471179 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:29:09.864664  471179 kubeadm.go:157] found existing configuration files:
	
	I1025 10:29:09.864754  471179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:29:09.872588  471179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:29:09.872682  471179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:29:09.880292  471179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:29:09.888183  471179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:29:09.888269  471179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:29:09.896048  471179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:29:09.903915  471179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:29:09.903990  471179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:29:09.911244  471179 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:29:09.918836  471179 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:29:09.918939  471179 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:29:09.926700  471179 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:29:10.032300  471179 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:29:10.133189  471179 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:29:27.135353  471179 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1025 10:29:27.135410  471179 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:29:27.135502  471179 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:29:27.135559  471179 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:29:27.135594  471179 kubeadm.go:318] OS: Linux
	I1025 10:29:27.135642  471179 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:29:27.135692  471179 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:29:27.135741  471179 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:29:27.135796  471179 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:29:27.135846  471179 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:29:27.135896  471179 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:29:27.135943  471179 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:29:27.135993  471179 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:29:27.136041  471179 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:29:27.136125  471179 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:29:27.136228  471179 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:29:27.136325  471179 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:29:27.136389  471179 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:29:27.139325  471179 out.go:252]   - Generating certificates and keys ...
	I1025 10:29:27.139414  471179 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:29:27.139478  471179 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:29:27.139545  471179 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:29:27.139602  471179 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:29:27.139662  471179 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:29:27.139712  471179 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:29:27.139766  471179 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:29:27.139891  471179 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-610853] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:29:27.139943  471179 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:29:27.140075  471179 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-610853] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:29:27.140149  471179 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:29:27.140213  471179 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:29:27.140268  471179 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:29:27.140324  471179 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:29:27.140375  471179 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:29:27.140427  471179 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:29:27.140493  471179 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:29:27.140547  471179 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:29:27.140628  471179 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:29:27.140694  471179 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1025 10:29:27.143797  471179 out.go:252]   - Booting up control plane ...
	I1025 10:29:27.143914  471179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:29:27.144006  471179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:29:27.144083  471179 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:29:27.144199  471179 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:29:27.144314  471179 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:29:27.144383  471179 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:29:27.144571  471179 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1025 10:29:27.144662  471179 kubeadm.go:318] [apiclient] All control plane components are healthy after 8.503293 seconds
	I1025 10:29:27.144776  471179 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:29:27.144919  471179 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:29:27.144985  471179 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:29:27.145213  471179 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-610853 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:29:27.145282  471179 kubeadm.go:318] [bootstrap-token] Using token: kpx68t.iozbw0ufye5paa65
	I1025 10:29:27.148166  471179 out.go:252]   - Configuring RBAC rules ...
	I1025 10:29:27.148286  471179 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:29:27.148370  471179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:29:27.148509  471179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:29:27.148636  471179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:29:27.148749  471179 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:29:27.148860  471179 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:29:27.148974  471179 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:29:27.149017  471179 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:29:27.149062  471179 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:29:27.149066  471179 kubeadm.go:318] 
	I1025 10:29:27.149126  471179 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:29:27.149130  471179 kubeadm.go:318] 
	I1025 10:29:27.149206  471179 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:29:27.149214  471179 kubeadm.go:318] 
	I1025 10:29:27.149239  471179 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:29:27.149297  471179 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:29:27.149350  471179 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:29:27.149354  471179 kubeadm.go:318] 
	I1025 10:29:27.149411  471179 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:29:27.149416  471179 kubeadm.go:318] 
	I1025 10:29:27.149462  471179 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:29:27.149467  471179 kubeadm.go:318] 
	I1025 10:29:27.149518  471179 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:29:27.149592  471179 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:29:27.149660  471179 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:29:27.149663  471179 kubeadm.go:318] 
	I1025 10:29:27.149747  471179 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:29:27.149822  471179 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:29:27.149826  471179 kubeadm.go:318] 
	I1025 10:29:27.149909  471179 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token kpx68t.iozbw0ufye5paa65 \
	I1025 10:29:27.150011  471179 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:29:27.150032  471179 kubeadm.go:318] 	--control-plane 
	I1025 10:29:27.150036  471179 kubeadm.go:318] 
	I1025 10:29:27.150130  471179 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:29:27.150135  471179 kubeadm.go:318] 
	I1025 10:29:27.150216  471179 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token kpx68t.iozbw0ufye5paa65 \
	I1025 10:29:27.150329  471179 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:29:27.150336  471179 cni.go:84] Creating CNI manager for ""
	I1025 10:29:27.150344  471179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:29:27.153400  471179 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:29:27.156441  471179 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:29:27.161157  471179 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1025 10:29:27.161176  471179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:29:27.191354  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:29:28.164675  471179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:29:28.164832  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:28.164964  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-610853 minikube.k8s.io/updated_at=2025_10_25T10_29_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=old-k8s-version-610853 minikube.k8s.io/primary=true
	I1025 10:29:28.354342  471179 ops.go:34] apiserver oom_adj: -16
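
The -16 read back here comes from the legacy /proc/<pid>/oom_adj interface (range -17 to 15; lower means less likely to be OOM-killed, -17 disables it entirely), so the freshly started apiserver is strongly shielded from the kernel OOM killer. The probe at 10:29:28.164 boils down to:

	cat "/proc/$(pgrep kube-apiserver)/oom_adj"    # prints -16 on this node
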
	I1025 10:29:28.354448  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:28.855574  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:29.355065  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:29.855202  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:30.355108  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:30.854609  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:31.354935  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:31.855449  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:32.355122  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:32.855253  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:33.355063  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:33.855456  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:34.354532  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:34.855282  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:35.355180  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:35.854772  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:36.354862  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:36.854694  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:37.354628  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:37.855402  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:38.354550  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:38.854663  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:39.354766  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:39.854719  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:40.354560  471179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:29:40.538947  471179 kubeadm.go:1113] duration metric: took 12.374177949s to wait for elevateKubeSystemPrivileges
	I1025 10:29:40.538988  471179 kubeadm.go:402] duration metric: took 30.733285651s to StartCluster
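
The run of identical `kubectl get sa default` lines from 10:29:28.354 to 10:29:40.354 is one readiness poll at roughly 500ms intervals: the default ServiceAccount only exists once the controller-manager's serviceaccount controllers are up, and that wait is the bulk of the 12.37s elevateKubeSystemPrivileges metric above. The same wait as a shell sketch (interval inferred from the timestamps):

	until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
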
	I1025 10:29:40.539007  471179 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:40.539099  471179 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:29:40.540290  471179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:29:40.540549  471179 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:29:40.540674  471179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:29:40.540935  471179 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:29:40.540976  471179 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:29:40.541105  471179 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-610853"
	I1025 10:29:40.541144  471179 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-610853"
	I1025 10:29:40.541198  471179 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:29:40.541115  471179 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-610853"
	I1025 10:29:40.541614  471179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-610853"
	I1025 10:29:40.541887  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:29:40.541969  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:29:40.544856  471179 out.go:179] * Verifying Kubernetes components...
	I1025 10:29:40.550893  471179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:29:40.601891  471179 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-610853"
	I1025 10:29:40.601939  471179 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:29:40.602377  471179 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:29:40.606116  471179 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:29:40.611949  471179 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:29:40.611984  471179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:29:40.612080  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:40.643319  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:40.649385  471179 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:29:40.649410  471179 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:29:40.649471  471179 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:29:40.683445  471179 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33422 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:29:40.831847  471179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:29:40.902601  471179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:29:40.927613  471179 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:29:40.984611  471179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:29:41.909591  471179 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.077699832s)
	I1025 10:29:41.909623  471179 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
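
Editor's note: the sed pipeline above edits the coredns ConfigMap in place and `kubectl replace`s it. Reconstructed from the sed expressions in the log, the injected Corefile additions are a hosts stanza ahead of the forward directive, plus a `log` directive ahead of `errors`:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

This is what makes `host.minikube.internal` resolvable from pods; `fallthrough` hands all other names on to the rest of the plugin chain.
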
	I1025 10:29:41.910830  471179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.008200849s)
	I1025 10:29:41.911835  471179 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-610853" to be "Ready" ...
	I1025 10:29:42.409616  471179 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.424963108s)
	I1025 10:29:42.412960  471179 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 10:29:42.415824  471179 addons.go:514] duration metric: took 1.874856256s for enable addons: enabled=[default-storageclass storage-provisioner]
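
Editor's note: the addon flow above is scp-a-manifest-then-apply: each YAML is copied to /etc/kubernetes/addons/ and applied with the cluster's own kubectl binary over SSH. A minimal sketch of the apply step with golang.org/x/crypto/ssh, using the port, user, and key path from the sshutil lines; this is roughly what ssh_runner does, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node only
    	}
    	cli, err := ssh.Dial("tcp", "127.0.0.1:33422", cfg)
    	if err != nil {
    		panic(err)
    	}
    	sess, err := cli.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	// Same command shape as the ssh_runner.go lines above.
    	out, err := sess.CombinedOutput("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml")
    	fmt.Println(string(out), err)
    }
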
	I1025 10:29:42.427782  471179 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-610853" context rescaled to 1 replicas
	W1025 10:29:43.914787  471179 node_ready.go:57] node "old-k8s-version-610853" has "Ready":"False" status (will retry)
	W1025 10:29:45.914934  471179 node_ready.go:57] node "old-k8s-version-610853" has "Ready":"False" status (will retry)
	W1025 10:29:48.414799  471179 node_ready.go:57] node "old-k8s-version-610853" has "Ready":"False" status (will retry)
	W1025 10:29:50.415620  471179 node_ready.go:57] node "old-k8s-version-610853" has "Ready":"False" status (will retry)
	W1025 10:29:52.915357  471179 node_ready.go:57] node "old-k8s-version-610853" has "Ready":"False" status (will retry)
	I1025 10:29:54.415668  471179 node_ready.go:49] node "old-k8s-version-610853" is "Ready"
	I1025 10:29:54.415701  471179 node_ready.go:38] duration metric: took 12.503835415s for node "old-k8s-version-610853" to be "Ready" ...
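
Editor's note: the node_ready wait above (12.5s, with "will retry" warnings while "Ready":"False") is a poll on the Node object's NodeReady condition. A client-go sketch of that loop, assuming a configured clientset and a recent apimachinery; it is an illustration, not minikube's code:

    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady polls until the named node reports NodeReady=True.
    func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat API blips as "not ready yet" and keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }
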
	I1025 10:29:54.415715  471179 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:29:54.415791  471179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:29:54.427452  471179 api_server.go:72] duration metric: took 13.886863903s to wait for apiserver process to appear ...
	I1025 10:29:54.427480  471179 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:29:54.427499  471179 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:29:54.437451  471179 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:29:54.438883  471179 api_server.go:141] control plane version: v1.28.0
	I1025 10:29:54.438911  471179 api_server.go:131] duration metric: took 11.424006ms to wait for apiserver health ...
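
Editor's note: the healthz probe above is a plain HTTPS GET that expects status 200 with body "ok". A self-contained sketch of the same check; certificate verification is skipped here for brevity, whereas minikube verifies against the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.85.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// Healthy when the endpoint returns 200 and "ok", as in the log above.
    	fmt.Println(resp.StatusCode == http.StatusOK && string(body) == "ok")
    }
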
	I1025 10:29:54.438921  471179 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:29:54.442684  471179 system_pods.go:59] 8 kube-system pods found
	I1025 10:29:54.442721  471179 system_pods.go:61] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:29:54.442728  471179 system_pods.go:61] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:54.442734  471179 system_pods.go:61] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:54.442738  471179 system_pods.go:61] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:54.442743  471179 system_pods.go:61] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:54.442747  471179 system_pods.go:61] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:54.442752  471179 system_pods.go:61] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:54.442758  471179 system_pods.go:61] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:29:54.442772  471179 system_pods.go:74] duration metric: took 3.843568ms to wait for pod list to return data ...
	I1025 10:29:54.442794  471179 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:29:54.445223  471179 default_sa.go:45] found service account: "default"
	I1025 10:29:54.445247  471179 default_sa.go:55] duration metric: took 2.442927ms for default service account to be created ...
	I1025 10:29:54.445257  471179 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:29:54.448874  471179 system_pods.go:86] 8 kube-system pods found
	I1025 10:29:54.448907  471179 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:29:54.448914  471179 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:54.448921  471179 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:54.448925  471179 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:54.448931  471179 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:54.448935  471179 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:54.448939  471179 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:54.448946  471179 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:29:54.448970  471179 retry.go:31] will retry after 237.112176ms: missing components: kube-dns
	I1025 10:29:54.690803  471179 system_pods.go:86] 8 kube-system pods found
	I1025 10:29:54.690881  471179 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:29:54.690903  471179 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:54.690930  471179 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:54.690962  471179 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:54.690990  471179 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:54.691016  471179 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:54.691041  471179 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:54.691078  471179 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:29:54.691116  471179 retry.go:31] will retry after 272.595732ms: missing components: kube-dns
	I1025 10:29:54.968317  471179 system_pods.go:86] 8 kube-system pods found
	I1025 10:29:54.968351  471179 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:29:54.968358  471179 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:54.968364  471179 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:54.968369  471179 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:54.968375  471179 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:54.968379  471179 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:54.968383  471179 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:54.968389  471179 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:29:54.968409  471179 retry.go:31] will retry after 380.153936ms: missing components: kube-dns
	I1025 10:29:55.352697  471179 system_pods.go:86] 8 kube-system pods found
	I1025 10:29:55.352732  471179 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:29:55.352739  471179 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:55.352745  471179 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:55.352750  471179 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:55.352755  471179 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:55.352758  471179 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:55.352763  471179 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:55.352769  471179 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:29:55.352790  471179 retry.go:31] will retry after 471.157637ms: missing components: kube-dns
	I1025 10:29:55.828260  471179 system_pods.go:86] 8 kube-system pods found
	I1025 10:29:55.828289  471179 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Running
	I1025 10:29:55.828295  471179 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running
	I1025 10:29:55.828300  471179 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:29:55.828305  471179 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running
	I1025 10:29:55.828311  471179 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running
	I1025 10:29:55.828327  471179 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:29:55.828331  471179 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running
	I1025 10:29:55.828336  471179 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Running
	I1025 10:29:55.828343  471179 system_pods.go:126] duration metric: took 1.383080326s to wait for k8s-apps to be running ...
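
Editor's note: the retry.go lines above ("will retry after 237ms / 272ms / 380ms / 471ms: missing components: kube-dns") show an increasing, jittered backoff between checks. A generic sketch of that pattern; the growth factor, jitter, and cap are assumptions, not minikube's exact tuning:

    package retry

    import (
    	"context"
    	"log"
    	"math/rand"
    	"time"
    )

    // UntilNil re-runs check with growing, jittered delays until it succeeds
    // or the context expires.
    func UntilNil(ctx context.Context, check func() error) error {
    	delay := 200 * time.Millisecond
    	for {
    		err := check()
    		if err == nil {
    			return nil
    		}
    		d := delay + time.Duration(rand.Int63n(int64(delay))/2) // jitter
    		log.Printf("will retry after %v: %v", d, err)
    		select {
    		case <-time.After(d):
    		case <-ctx.Done():
    			return ctx.Err()
    		}
    		if delay < 2*time.Second {
    			delay = delay * 3 / 2
    		}
    	}
    }
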
	I1025 10:29:55.828351  471179 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:29:55.828406  471179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:29:55.840886  471179 system_svc.go:56] duration metric: took 12.525837ms WaitForService to wait for kubelet
	I1025 10:29:55.840913  471179 kubeadm.go:586] duration metric: took 15.300329557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:29:55.840933  471179 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:29:55.843505  471179 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:29:55.843535  471179 node_conditions.go:123] node cpu capacity is 2
	I1025 10:29:55.843548  471179 node_conditions.go:105] duration metric: took 2.609231ms to run NodePressure ...
	I1025 10:29:55.843560  471179 start.go:241] waiting for startup goroutines ...
	I1025 10:29:55.843567  471179 start.go:246] waiting for cluster config update ...
	I1025 10:29:55.843579  471179 start.go:255] writing updated cluster config ...
	I1025 10:29:55.843872  471179 ssh_runner.go:195] Run: rm -f paused
	I1025 10:29:55.847246  471179 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:29:55.851275  471179 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.856085  471179 pod_ready.go:94] pod "coredns-5dd5756b68-mp4xx" is "Ready"
	I1025 10:29:55.856115  471179 pod_ready.go:86] duration metric: took 4.818515ms for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.859122  471179 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.863692  471179 pod_ready.go:94] pod "etcd-old-k8s-version-610853" is "Ready"
	I1025 10:29:55.863724  471179 pod_ready.go:86] duration metric: took 4.577649ms for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.867353  471179 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.871778  471179 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-610853" is "Ready"
	I1025 10:29:55.871804  471179 pod_ready.go:86] duration metric: took 4.424048ms for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:55.874583  471179 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:56.251845  471179 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-610853" is "Ready"
	I1025 10:29:56.251930  471179 pod_ready.go:86] duration metric: took 377.324504ms for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:56.452101  471179 pod_ready.go:83] waiting for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:56.851276  471179 pod_ready.go:94] pod "kube-proxy-pvxrq" is "Ready"
	I1025 10:29:56.851305  471179 pod_ready.go:86] duration metric: took 399.171287ms for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:57.052149  471179 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:57.451711  471179 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-610853" is "Ready"
	I1025 10:29:57.451739  471179 pod_ready.go:86] duration metric: took 399.51498ms for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:29:57.451752  471179 pod_ready.go:40] duration metric: took 1.604479209s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
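
Editor's note: the pod_ready phase above checks, per label selector (k8s-app=kube-dns, component=etcd, and so on), that every matching kube-system pod reports the PodReady condition. A client-go sketch of one such check, assuming a configured clientset; not minikube's implementation:

    package podready

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // AllReady reports whether every kube-system pod matching the selector
    // (e.g. "k8s-app=kube-dns") has PodReady=True.
    func AllReady(ctx context.Context, cs kubernetes.Interface, selector string) (bool, error) {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
    	if err != nil {
    		return false, err
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    				break
    			}
    		}
    		if !ready {
    			return false, nil
    		}
    	}
    	return true, nil
    }
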
	I1025 10:29:57.524609  471179 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:29:57.527883  471179 out.go:203] 
	W1025 10:29:57.530864  471179 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:29:57.533872  471179 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:29:57.537841  471179 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-610853" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:29:54 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:54.61787941Z" level=info msg="Created container 366ad831bfe2984a098a21a9a313989626eedf16b568256b5e6c6919a9d4b627: kube-system/coredns-5dd5756b68-mp4xx/coredns" id=63f804ec-7312-4ad1-b179-739997a5fa19 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:29:54 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:54.618785695Z" level=info msg="Starting container: 366ad831bfe2984a098a21a9a313989626eedf16b568256b5e6c6919a9d4b627" id=f67e8224-d3cd-4b34-ae93-c1ade1372074 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:29:54 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:54.623563683Z" level=info msg="Started container" PID=1957 containerID=366ad831bfe2984a098a21a9a313989626eedf16b568256b5e6c6919a9d4b627 description=kube-system/coredns-5dd5756b68-mp4xx/coredns id=f67e8224-d3cd-4b34-ae93-c1ade1372074 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0d8c7bdb7ce2fd03301d0327ef382d3572ed176da3e30b945dd40437bcd87fcb
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.948039214Z" level=info msg="Running pod sandbox: default/busybox/POD" id=27d3918d-0158-4d39-a65e-facfac99e4ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.948118723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.953359043Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7 UID:ddea46f9-0802-490e-98fa-48636d4ec6e5 NetNS:/var/run/netns/9526d421-1165-4719-8a01-385a60f4db77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079898}] Aliases:map[]}"
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.953510422Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.96369949Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7 UID:ddea46f9-0802-490e-98fa-48636d4ec6e5 NetNS:/var/run/netns/9526d421-1165-4719-8a01-385a60f4db77 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079898}] Aliases:map[]}"
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.96405609Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.97009382Z" level=info msg="Ran pod sandbox 50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7 with infra container: default/busybox/POD" id=27d3918d-0158-4d39-a65e-facfac99e4ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.971290421Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=61052059-c2a5-4b01-a763-5f7dccf05b54 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.971417053Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=61052059-c2a5-4b01-a763-5f7dccf05b54 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.971458645Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=61052059-c2a5-4b01-a763-5f7dccf05b54 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.971962717Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f0babcf-e523-4ef1-ab55-d5af2a8d5a03 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:29:58 old-k8s-version-610853 crio[840]: time="2025-10-25T10:29:58.975055753Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.217200123Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=8f0babcf-e523-4ef1-ab55-d5af2a8d5a03 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.222450717Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=68078ccc-b54f-4254-a84a-8545901e9cf0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.230375912Z" level=info msg="Creating container: default/busybox/busybox" id=6967a9a7-c3c6-44d9-9779-6c74eb9c78fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.230500107Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.243685874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.244297787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.273820286Z" level=info msg="Created container 2ebac7806f252956f32ac8f046196306b2dd6fcea96b4bb11ac1b3b93a61ff92: default/busybox/busybox" id=6967a9a7-c3c6-44d9-9779-6c74eb9c78fe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.277177334Z" level=info msg="Starting container: 2ebac7806f252956f32ac8f046196306b2dd6fcea96b4bb11ac1b3b93a61ff92" id=b193fc68-b23a-4b7f-bbb1-ff6d21313d16 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:30:01 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:01.282266999Z" level=info msg="Started container" PID=2013 containerID=2ebac7806f252956f32ac8f046196306b2dd6fcea96b4bb11ac1b3b93a61ff92 description=default/busybox/busybox id=b193fc68-b23a-4b7f-bbb1-ff6d21313d16 name=/runtime.v1.RuntimeService/StartContainer sandboxID=50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7
	Oct 25 10:30:06 old-k8s-version-610853 crio[840]: time="2025-10-25T10:30:06.919584901Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
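
Editor's note: the CRI-O section above shows the standard CRI image path: ImageStatus reports a miss, PullImage fetches the tag and resolves it to a digest, then the container is created from it. A hedged Go sketch of those two RPCs against the crio.sock (normally you would just use crictl); the gRPC wiring is illustrative, only the socket path, service names, and image come from the log:

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Same socket as the cri-socket annotation: unix:///var/run/crio/crio.sock
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	img := runtimeapi.NewImageServiceClient(conn)
    	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

    	// "Checking image status" first, then "Pulling image" on a miss.
    	st, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{Image: spec})
    	if err != nil {
    		panic(err)
    	}
    	if st.Image == nil {
    		resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{Image: spec})
    		if err != nil {
    			panic(err)
    		}
    		fmt.Println("pulled:", resp.ImageRef) // digest-pinned ref, as in the "Pulled image" line
    	}
    }
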
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2ebac7806f252       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   50c434161c55d       busybox                                          default
	366ad831bfe29       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   0d8c7bdb7ce2f       coredns-5dd5756b68-mp4xx                         kube-system
	a078a08e4a13d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   45ba7bcc3ece8       storage-provisioner                              kube-system
	b9235906a667d       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   63597da2ced23       kindnet-vgctp                                    kube-system
	e09945255d34e       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      27 seconds ago      Running             kube-proxy                0                   5e28fc3231a7d       kube-proxy-pvxrq                                 kube-system
	088f155c90c0b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   9bd57bd60a781       kube-controller-manager-old-k8s-version-610853   kube-system
	92b534033d49b       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   749b0523d8e54       kube-apiserver-old-k8s-version-610853            kube-system
	3031cedf44f7b       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   96971431e5571       etcd-old-k8s-version-610853                      kube-system
	6c5d13849a27a       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   5dc5429a40c03       kube-scheduler-old-k8s-version-610853            kube-system
	
	
	==> coredns [366ad831bfe2984a098a21a9a313989626eedf16b568256b5e6c6919a9d4b627] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39846 - 25802 "HINFO IN 67122743245163170.4027773233063650668. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.014067329s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-610853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-610853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-610853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:29:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-610853
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:30:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:29:58 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:29:58 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:29:58 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:29:58 +0000   Sat, 25 Oct 2025 10:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-610853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16c3fb75-2c85-4847-b008-4bbd6334ab71
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-mp4xx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-610853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-vgctp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-610853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-610853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-pvxrq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-610853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-610853 event: Registered Node old-k8s-version-610853 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-610853 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 10:03] overlayfs: idmapped layers are currently not supported
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [3031cedf44f7bae13b9e53a55d0368b72599598a9314ef0148fedd5493163b1e] <==
	{"level":"info","ts":"2025-10-25T10:29:19.580542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:29:19.580685Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:29:19.583993Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T10:29:19.584152Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:29:19.584695Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:29:19.585381Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T10:29:19.585452Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:29:20.363207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-25T10:29:20.363335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-25T10:29:20.363375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-25T10:29:20.363448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:29:20.36348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:29:20.363528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-25T10:29:20.363561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:29:20.364826Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:29:20.366185Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-610853 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:29:20.366277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:29:20.36694Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:29:20.367024Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:29:20.36705Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:29:20.367167Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:29:20.367595Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:29:20.367989Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:29:20.368049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-25T10:29:20.404593Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 10:30:08 up  2:12,  0 user,  load average: 3.25, 3.78, 3.07
	Linux old-k8s-version-610853 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b9235906a667dca0a6c900074ffba479973861664c299c5d18a2a7019f980d53] <==
	I1025 10:29:43.778583       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:29:43.778890       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:29:43.779026       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:29:43.779037       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:29:43.779051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:29:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:29:43.981166       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:29:43.981248       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:29:43.981283       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:29:43.981639       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:29:44.181368       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:29:44.181456       1 metrics.go:72] Registering metrics
	I1025 10:29:44.181529       1 controller.go:711] "Syncing nftables rules"
	I1025 10:29:53.981868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:29:53.981908       1 main.go:301] handling current node
	I1025 10:30:03.981974       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:30:03.982014       1 main.go:301] handling current node
	
	
	==> kube-apiserver [92b534033d49bdc2dd3f6df8eaf3e1e8bf885ce1941da27893e4ad98284b3def] <==
	I1025 10:29:24.243469       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:29:24.243469       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:29:24.244068       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:29:24.259275       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1025 10:29:24.261467       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:29:24.263509       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:29:24.263566       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:29:24.263606       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:29:24.307350       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:29:24.387308       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:29:24.640271       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:29:24.648158       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:29:24.648242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:29:25.393913       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:29:25.444301       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:29:25.504322       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:29:25.516745       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:29:25.517964       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:29:25.523636       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:29:26.452991       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:29:27.022555       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:29:27.061178       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:29:27.077699       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1025 10:29:39.916765       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:29:40.019733       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [088f155c90c0b3d7b468cc47df3ac3a920c789ac7cbad40fa68da376967b043c] <==
	I1025 10:29:39.448946       1 shared_informer.go:318] Caches are synced for disruption
	I1025 10:29:39.483395       1 shared_informer.go:318] Caches are synced for resource quota
	I1025 10:29:39.861051       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:29:39.861081       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:29:39.886018       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:29:39.923411       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1025 10:29:40.034645       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-pvxrq"
	I1025 10:29:40.043047       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vgctp"
	I1025 10:29:40.321578       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2g4tk"
	I1025 10:29:40.337420       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mp4xx"
	I1025 10:29:40.347070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="423.507115ms"
	I1025 10:29:40.376033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.71988ms"
	I1025 10:29:40.376132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="55.139µs"
	I1025 10:29:40.401965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.66µs"
	I1025 10:29:41.982461       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1025 10:29:42.044505       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-2g4tk"
	I1025 10:29:42.059367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="76.529349ms"
	I1025 10:29:42.087213       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.79616ms"
	I1025 10:29:42.087306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.14µs"
	I1025 10:29:54.251763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="139.161µs"
	I1025 10:29:54.282779       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.658µs"
	I1025 10:29:54.389910       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1025 10:29:55.379264       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="205.27µs"
	I1025 10:29:55.438426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="16.843241ms"
	I1025 10:29:55.438651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.559µs"
	
	
	==> kube-proxy [e09945255d34e56eff96ccbc72b8651a9585c89367877c32155c59c5b1c95214] <==
	I1025 10:29:40.567551       1 server_others.go:69] "Using iptables proxy"
	I1025 10:29:40.647709       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:29:40.820738       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:29:40.842394       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:29:40.842450       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:29:40.842459       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:29:40.842491       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:29:40.842713       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:29:40.842724       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:29:40.843864       1 config.go:188] "Starting service config controller"
	I1025 10:29:40.843884       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:29:40.843914       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:29:40.843918       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:29:40.853508       1 config.go:315] "Starting node config controller"
	I1025 10:29:40.853533       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:29:40.944030       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1025 10:29:40.944092       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:29:40.954724       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6c5d13849a27afc6f6b84effb2f6d197eb8dbce4a12fb4b658d6873c731eec25] <==
	W1025 10:29:24.817565       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:29:24.817594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 10:29:24.817519       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 10:29:24.817642       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 10:29:24.817476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1025 10:29:24.817668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1025 10:29:24.821401       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 10:29:24.822307       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 10:29:24.821405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 10:29:24.822420       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 10:29:24.821525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:29:24.822494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1025 10:29:24.821635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:29:24.822586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:29:24.821727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:29:24.822653       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:29:24.821767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:29:24.822910       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:29:24.821856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1025 10:29:24.822974       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1025 10:29:24.821922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 10:29:24.823035       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 10:29:24.828129       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:29:24.828178       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1025 10:29:25.709270       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.147928    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6092762b-d84b-4455-aac9-b17e1c0b90e6-xtables-lock\") pod \"kindnet-vgctp\" (UID: \"6092762b-d84b-4455-aac9-b17e1c0b90e6\") " pod="kube-system/kindnet-vgctp"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.147952    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6092762b-d84b-4455-aac9-b17e1c0b90e6-lib-modules\") pod \"kindnet-vgctp\" (UID: \"6092762b-d84b-4455-aac9-b17e1c0b90e6\") " pod="kube-system/kindnet-vgctp"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.147981    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r25sb\" (UniqueName: \"kubernetes.io/projected/6092762b-d84b-4455-aac9-b17e1c0b90e6-kube-api-access-r25sb\") pod \"kindnet-vgctp\" (UID: \"6092762b-d84b-4455-aac9-b17e1c0b90e6\") " pod="kube-system/kindnet-vgctp"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.148006    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea082c00-6806-45fc-96a0-de6cbe2b9afd-lib-modules\") pod \"kube-proxy-pvxrq\" (UID: \"ea082c00-6806-45fc-96a0-de6cbe2b9afd\") " pod="kube-system/kube-proxy-pvxrq"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.148032    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ea082c00-6806-45fc-96a0-de6cbe2b9afd-kube-proxy\") pod \"kube-proxy-pvxrq\" (UID: \"ea082c00-6806-45fc-96a0-de6cbe2b9afd\") " pod="kube-system/kube-proxy-pvxrq"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: I1025 10:29:40.148058    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvtng\" (UniqueName: \"kubernetes.io/projected/ea082c00-6806-45fc-96a0-de6cbe2b9afd-kube-api-access-nvtng\") pod \"kube-proxy-pvxrq\" (UID: \"ea082c00-6806-45fc-96a0-de6cbe2b9afd\") " pod="kube-system/kube-proxy-pvxrq"
	Oct 25 10:29:40 old-k8s-version-610853 kubelet[1382]: W1025 10:29:40.409320    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-63597da2ced23cc64da8b2c4b45fbf7f16ebb792dc12a80d8892baee19bf53b6 WatchSource:0}: Error finding container 63597da2ced23cc64da8b2c4b45fbf7f16ebb792dc12a80d8892baee19bf53b6: Status 404 returned error can't find the container with id 63597da2ced23cc64da8b2c4b45fbf7f16ebb792dc12a80d8892baee19bf53b6
	Oct 25 10:29:44 old-k8s-version-610853 kubelet[1382]: I1025 10:29:44.352966    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-pvxrq" podStartSLOduration=4.352911662 podCreationTimestamp="2025-10-25 10:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:29:41.352930745 +0000 UTC m=+14.372187985" watchObservedRunningTime="2025-10-25 10:29:44.352911662 +0000 UTC m=+17.372168902"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.188282    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.227934    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vgctp" podStartSLOduration=10.991769429 podCreationTimestamp="2025-10-25 10:29:40 +0000 UTC" firstStartedPulling="2025-10-25 10:29:40.412104263 +0000 UTC m=+13.431361503" lastFinishedPulling="2025-10-25 10:29:43.648210064 +0000 UTC m=+16.667467304" observedRunningTime="2025-10-25 10:29:44.355718385 +0000 UTC m=+17.374975625" watchObservedRunningTime="2025-10-25 10:29:54.22787523 +0000 UTC m=+27.247132478"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.228371    1382 topology_manager.go:215] "Topology Admit Handler" podUID="7f2741b4-bcad-4266-9634-4b2aee05a1d7" podNamespace="kube-system" podName="storage-provisioner"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.250203    1382 topology_manager.go:215] "Topology Admit Handler" podUID="339b3875-9aea-4d9d-bd92-87082f232a5e" podNamespace="kube-system" podName="coredns-5dd5756b68-mp4xx"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.270081    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7f2741b4-bcad-4266-9634-4b2aee05a1d7-tmp\") pod \"storage-provisioner\" (UID: \"7f2741b4-bcad-4266-9634-4b2aee05a1d7\") " pod="kube-system/storage-provisioner"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.270153    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9sm\" (UniqueName: \"kubernetes.io/projected/7f2741b4-bcad-4266-9634-4b2aee05a1d7-kube-api-access-ln9sm\") pod \"storage-provisioner\" (UID: \"7f2741b4-bcad-4266-9634-4b2aee05a1d7\") " pod="kube-system/storage-provisioner"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.371094    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/339b3875-9aea-4d9d-bd92-87082f232a5e-config-volume\") pod \"coredns-5dd5756b68-mp4xx\" (UID: \"339b3875-9aea-4d9d-bd92-87082f232a5e\") " pod="kube-system/coredns-5dd5756b68-mp4xx"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: I1025 10:29:54.371672    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2gf\" (UniqueName: \"kubernetes.io/projected/339b3875-9aea-4d9d-bd92-87082f232a5e-kube-api-access-sp2gf\") pod \"coredns-5dd5756b68-mp4xx\" (UID: \"339b3875-9aea-4d9d-bd92-87082f232a5e\") " pod="kube-system/coredns-5dd5756b68-mp4xx"
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: W1025 10:29:54.539414    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-45ba7bcc3ece86bad2184d6739220a614a95eb4c753f98f4285340ffdb7d5cad WatchSource:0}: Error finding container 45ba7bcc3ece86bad2184d6739220a614a95eb4c753f98f4285340ffdb7d5cad: Status 404 returned error can't find the container with id 45ba7bcc3ece86bad2184d6739220a614a95eb4c753f98f4285340ffdb7d5cad
	Oct 25 10:29:54 old-k8s-version-610853 kubelet[1382]: W1025 10:29:54.579895    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-0d8c7bdb7ce2fd03301d0327ef382d3572ed176da3e30b945dd40437bcd87fcb WatchSource:0}: Error finding container 0d8c7bdb7ce2fd03301d0327ef382d3572ed176da3e30b945dd40437bcd87fcb: Status 404 returned error can't find the container with id 0d8c7bdb7ce2fd03301d0327ef382d3572ed176da3e30b945dd40437bcd87fcb
	Oct 25 10:29:55 old-k8s-version-610853 kubelet[1382]: I1025 10:29:55.404094    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mp4xx" podStartSLOduration=15.404031333 podCreationTimestamp="2025-10-25 10:29:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:29:55.380348431 +0000 UTC m=+28.399605671" watchObservedRunningTime="2025-10-25 10:29:55.404031333 +0000 UTC m=+28.423288572"
	Oct 25 10:29:55 old-k8s-version-610853 kubelet[1382]: I1025 10:29:55.420533    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.420478146 podCreationTimestamp="2025-10-25 10:29:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:29:55.405065692 +0000 UTC m=+28.424322932" watchObservedRunningTime="2025-10-25 10:29:55.420478146 +0000 UTC m=+28.439735386"
	Oct 25 10:29:57 old-k8s-version-610853 kubelet[1382]: I1025 10:29:57.745924    1382 topology_manager.go:215] "Topology Admit Handler" podUID="ddea46f9-0802-490e-98fa-48636d4ec6e5" podNamespace="default" podName="busybox"
	Oct 25 10:29:57 old-k8s-version-610853 kubelet[1382]: W1025 10:29:57.753442    1382 reflector.go:535] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-610853" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-610853' and this object
	Oct 25 10:29:57 old-k8s-version-610853 kubelet[1382]: E1025 10:29:57.753493    1382 reflector.go:147] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-610853" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-610853' and this object
	Oct 25 10:29:57 old-k8s-version-610853 kubelet[1382]: I1025 10:29:57.794623    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6zm\" (UniqueName: \"kubernetes.io/projected/ddea46f9-0802-490e-98fa-48636d4ec6e5-kube-api-access-bs6zm\") pod \"busybox\" (UID: \"ddea46f9-0802-490e-98fa-48636d4ec6e5\") " pod="default/busybox"
	Oct 25 10:29:58 old-k8s-version-610853 kubelet[1382]: W1025 10:29:58.966068    1382 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7 WatchSource:0}: Error finding container 50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7: Status 404 returned error can't find the container with id 50c434161c55dc2235755f6b9a2608168306054a60b138ab63ec511e1756d5d7
	
	
	==> storage-provisioner [a078a08e4a13d6449595b418eb080e7bdfaf2e89c78326abd4b8ccaf1bb3c14d] <==
	I1025 10:29:54.595935       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:29:54.625931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:29:54.626002       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:29:54.648805       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:29:54.649586       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_d4db7c85-2f8d-4772-83df-411ebe036621!
	I1025 10:29:54.649972       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24744037-24e8-4570-96d7-2db397f7e01e", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-610853_d4db7c85-2f8d-4772-83df-411ebe036621 became leader
	I1025 10:29:54.755811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_d4db7c85-2f8d-4772-83df-411ebe036621!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-610853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)
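Note on the captured logs: the kube-scheduler "forbidden" warnings above are a startup artifact — the scheduler begins listing resources before the API server has finished serving its bootstrap RBAC policy, and the later "Caches are synced" line shows the informers recovered on their own, so they are not, by themselves, evidence for this addon failure. One way to confirm the scheduler's permissions after startup (a sketch; it assumes the kubeconfig context carries the profile name, as the harness invocations above do):

	kubectl --context old-k8s-version-610853 auth can-i list pods --as=system:kube-scheduler
	kubectl --context old-k8s-version-610853 get clusterrolebinding system:kube-scheduler -o yaml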

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-610853 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-610853 --alsologtostderr -v=1: exit status 80 (2.012785989s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-610853 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:31:24.935401  477010 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:31:24.935583  477010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:31:24.935595  477010 out.go:374] Setting ErrFile to fd 2...
	I1025 10:31:24.935600  477010 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:31:24.935877  477010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:31:24.936268  477010 out.go:368] Setting JSON to false
	I1025 10:31:24.936314  477010 mustload.go:65] Loading cluster: old-k8s-version-610853
	I1025 10:31:24.936830  477010 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:31:24.937367  477010 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:31:24.954363  477010 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:31:24.954683  477010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:31:25.013127  477010 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:31:25.001585395 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:31:25.013930  477010 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-610853 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:31:25.017521  477010 out.go:179] * Pausing node old-k8s-version-610853 ... 
	I1025 10:31:25.020565  477010 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:31:25.020940  477010 ssh_runner.go:195] Run: systemctl --version
	I1025 10:31:25.020992  477010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:31:25.046014  477010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:31:25.154094  477010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:31:25.171421  477010 pause.go:52] kubelet running: true
	I1025 10:31:25.171510  477010 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:31:25.418583  477010 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:31:25.418672  477010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:31:25.507420  477010 cri.go:89] found id: "091e16d9863a0d528c1db558671f37699e8fef853fcec9f0ddb84719849a6993"
	I1025 10:31:25.507445  477010 cri.go:89] found id: "b4a063cf1b81586de3620a2e35b3fb766dfd73a20da17e5cb8ba258e8c2b2cfe"
	I1025 10:31:25.507451  477010 cri.go:89] found id: "2fff77ac5e928293117962aeb11abaa056b2eae73a468ecf54a7ac63f46f3a60"
	I1025 10:31:25.507455  477010 cri.go:89] found id: "727f02ebb4a44e8230fd46ee0e62a5d410cc1ab651fb405b54edd36cb5b76a9b"
	I1025 10:31:25.507459  477010 cri.go:89] found id: "ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda"
	I1025 10:31:25.507463  477010 cri.go:89] found id: "4e548687fb61e724d0492eca4c4b6af8ea0790732c7f2dbc6dd4670e9ee4e668"
	I1025 10:31:25.507466  477010 cri.go:89] found id: "ce552c2cb6e4e7361de884db9ef88fd97d4affae078257f79a846fb8bf14e468"
	I1025 10:31:25.507469  477010 cri.go:89] found id: "de79fc3d299d345572eba9b73c7727595ed89922ab269e444d5396b944bf1644"
	I1025 10:31:25.507473  477010 cri.go:89] found id: "01ef458479a2b900515d032a9c6b16080bf3eecf88bec56c1db80e3b57c927a1"
	I1025 10:31:25.507484  477010 cri.go:89] found id: "ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	I1025 10:31:25.507488  477010 cri.go:89] found id: "830e333170d03998870a50498589369cec9f3aec50bea277636833a4af430c9d"
	I1025 10:31:25.507491  477010 cri.go:89] found id: ""
	I1025 10:31:25.507545  477010 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:31:25.528029  477010 retry.go:31] will retry after 261.358364ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:31:25Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:31:25.790575  477010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:31:25.803871  477010 pause.go:52] kubelet running: false
	I1025 10:31:25.803974  477010 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:31:25.993591  477010 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:31:25.993682  477010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:31:26.079823  477010 cri.go:89] found id: "091e16d9863a0d528c1db558671f37699e8fef853fcec9f0ddb84719849a6993"
	I1025 10:31:26.079845  477010 cri.go:89] found id: "b4a063cf1b81586de3620a2e35b3fb766dfd73a20da17e5cb8ba258e8c2b2cfe"
	I1025 10:31:26.079850  477010 cri.go:89] found id: "2fff77ac5e928293117962aeb11abaa056b2eae73a468ecf54a7ac63f46f3a60"
	I1025 10:31:26.079854  477010 cri.go:89] found id: "727f02ebb4a44e8230fd46ee0e62a5d410cc1ab651fb405b54edd36cb5b76a9b"
	I1025 10:31:26.079857  477010 cri.go:89] found id: "ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda"
	I1025 10:31:26.079861  477010 cri.go:89] found id: "4e548687fb61e724d0492eca4c4b6af8ea0790732c7f2dbc6dd4670e9ee4e668"
	I1025 10:31:26.079864  477010 cri.go:89] found id: "ce552c2cb6e4e7361de884db9ef88fd97d4affae078257f79a846fb8bf14e468"
	I1025 10:31:26.079877  477010 cri.go:89] found id: "de79fc3d299d345572eba9b73c7727595ed89922ab269e444d5396b944bf1644"
	I1025 10:31:26.079881  477010 cri.go:89] found id: "01ef458479a2b900515d032a9c6b16080bf3eecf88bec56c1db80e3b57c927a1"
	I1025 10:31:26.079890  477010 cri.go:89] found id: "ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	I1025 10:31:26.079894  477010 cri.go:89] found id: "830e333170d03998870a50498589369cec9f3aec50bea277636833a4af430c9d"
	I1025 10:31:26.079896  477010 cri.go:89] found id: ""
	I1025 10:31:26.079945  477010 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:31:26.092899  477010 retry.go:31] will retry after 494.692506ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:31:26Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:31:26.588698  477010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:31:26.602376  477010 pause.go:52] kubelet running: false
	I1025 10:31:26.602458  477010 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:31:26.785175  477010 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:31:26.785277  477010 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:31:26.857084  477010 cri.go:89] found id: "091e16d9863a0d528c1db558671f37699e8fef853fcec9f0ddb84719849a6993"
	I1025 10:31:26.857160  477010 cri.go:89] found id: "b4a063cf1b81586de3620a2e35b3fb766dfd73a20da17e5cb8ba258e8c2b2cfe"
	I1025 10:31:26.857179  477010 cri.go:89] found id: "2fff77ac5e928293117962aeb11abaa056b2eae73a468ecf54a7ac63f46f3a60"
	I1025 10:31:26.857201  477010 cri.go:89] found id: "727f02ebb4a44e8230fd46ee0e62a5d410cc1ab651fb405b54edd36cb5b76a9b"
	I1025 10:31:26.857233  477010 cri.go:89] found id: "ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda"
	I1025 10:31:26.857257  477010 cri.go:89] found id: "4e548687fb61e724d0492eca4c4b6af8ea0790732c7f2dbc6dd4670e9ee4e668"
	I1025 10:31:26.857279  477010 cri.go:89] found id: "ce552c2cb6e4e7361de884db9ef88fd97d4affae078257f79a846fb8bf14e468"
	I1025 10:31:26.857309  477010 cri.go:89] found id: "de79fc3d299d345572eba9b73c7727595ed89922ab269e444d5396b944bf1644"
	I1025 10:31:26.857326  477010 cri.go:89] found id: "01ef458479a2b900515d032a9c6b16080bf3eecf88bec56c1db80e3b57c927a1"
	I1025 10:31:26.857432  477010 cri.go:89] found id: "ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	I1025 10:31:26.857460  477010 cri.go:89] found id: "830e333170d03998870a50498589369cec9f3aec50bea277636833a4af430c9d"
	I1025 10:31:26.857490  477010 cri.go:89] found id: ""
	I1025 10:31:26.857568  477010 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:31:26.873051  477010 out.go:203] 
	W1025 10:31:26.876678  477010 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:31:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:31:26.876708  477010 out.go:285] * 
	* 
	W1025 10:31:26.883964  477010 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:31:26.887368  477010 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-610853 --alsologtostderr -v=1 failed: exit status 80
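Each of the three pause attempts in the stderr above follows the same sequence — check whether kubelet is active, run `sudo systemctl disable --now kubelet`, enumerate CRI containers with crictl, then call `sudo runc list -f json` — and every attempt dies on the runc step with "open /run/runc: no such file or directory". A way to probe this by hand is to compare what the CRI sees with what runc's state directory holds (a sketch, assuming the profile is still running):

	out/minikube-linux-arm64 -p old-k8s-version-610853 ssh -- sudo crictl ps
	out/minikube-linux-arm64 -p old-k8s-version-610853 ssh -- sudo ls /run/runc

If crictl lists running containers while /run/runc is absent, CRI-O is keeping runtime state under a different root than the one the pause path queries. Note also the side effect visible above: the first attempt disables kubelet ("kubelet running: false" on the retries), and it stays down for the rest of the test.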
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-610853
helpers_test.go:243: (dbg) docker inspect old-k8s-version-610853:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	        "Created": "2025-10-25T10:28:59.57788081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:30:21.931816315Z",
	            "FinishedAt": "2025-10-25T10:30:21.119343628Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hosts",
	        "LogPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2-json.log",
	        "Name": "/old-k8s-version-610853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-610853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-610853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	                "LowerDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-610853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-610853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-610853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43f5489946cf1f2b9808af7448809b548f726326e33114f919d07bc836c3a181",
	            "SandboxKey": "/var/run/docker/netns/43f5489946cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-610853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:d9:c1:96:b0:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "81225534a6ecbdb108a21a8d61134e13e2b296f3c48ec26db1c8d60aa1908e7c",
	                    "EndpointID": "24cca6b8f150e3abdf20ff03ab17f1f764325c02274f9f29bcfc659fe0a84923",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-610853",
	                        "d9ac8e10f5b1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
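The inspect output shows the node container itself is fine ("Running": true, "Paused": false). That is expected even for a successful pause, since minikube pauses the Kubernetes workloads inside the node rather than docker-pausing the node container; the failure sits in the in-node runc listing, not the Docker layer. To pull just these state fields (a sketch):

	docker inspect -f '{{.State.Status}} {{.State.Paused}}' old-k8s-version-610853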
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853: exit status 2 (363.665917ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
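Exit status 2 alongside "Running" is consistent with the failed pause: the host is up, but the first pause attempt already ran `systemctl disable --now kubelet`, so component status is degraded. For a per-component view one could run (a sketch):

	out/minikube-linux-arm64 status -p old-k8s-version-610853 -o json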
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25: (1.374433772s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-821614 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo containerd config dump                                                                                                                                                                                                  │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo crio config                                                                                                                                                                                                             │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-821614                                                                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:27 UTC │
	│ start   │ -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p kubernetes-upgrade-845331                                                                                                                                                                                                                  │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-313068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-068963                                                                                                                                                                                                                   │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-506318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:30:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:30:21.667394  474795 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:30:21.667530  474795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:30:21.667541  474795 out.go:374] Setting ErrFile to fd 2...
	I1025 10:30:21.667546  474795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:30:21.667820  474795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:30:21.668210  474795 out.go:368] Setting JSON to false
	I1025 10:30:21.669089  474795 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7972,"bootTime":1761380250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:30:21.669152  474795 start.go:141] virtualization:  
	I1025 10:30:21.672307  474795 out.go:179] * [old-k8s-version-610853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:30:21.676152  474795 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:30:21.676274  474795 notify.go:220] Checking for updates...
	I1025 10:30:21.682115  474795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:30:21.685046  474795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:21.687991  474795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:30:21.690880  474795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:30:21.693796  474795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:30:21.697237  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:21.700700  474795 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:30:21.703549  474795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:30:21.727343  474795 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:30:21.727467  474795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:30:21.783871  474795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:30:21.775079126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
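
Note: the dump above is the decoded result of the `docker system info --format "{{json .}}"` call two lines earlier. A hedged sketch of issuing the same query and decoding a deliberately tiny subset of the fields visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo holds a small subset of the fields seen in the log's dump.
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	CgroupDriver    string `json:"CgroupDriver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	// Ask the daemon to emit its info record as one JSON object.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}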
	I1025 10:30:21.783979  474795 docker.go:318] overlay module found
	I1025 10:30:21.787143  474795 out.go:179] * Using the docker driver based on existing profile
	I1025 10:30:21.790048  474795 start.go:305] selected driver: docker
	I1025 10:30:21.790068  474795 start.go:925] validating driver "docker" against &{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:21.790163  474795 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:30:21.790889  474795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:30:21.848060  474795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:30:21.839044432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:30:21.848435  474795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:30:21.848472  474795 cni.go:84] Creating CNI manager for ""
	I1025 10:30:21.848539  474795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:30:21.848580  474795 start.go:349] cluster config:
	{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:21.851879  474795 out.go:179] * Starting "old-k8s-version-610853" primary control-plane node in "old-k8s-version-610853" cluster
	I1025 10:30:21.854577  474795 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:30:21.857455  474795 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:30:21.860135  474795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:30:21.860196  474795 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:30:21.860209  474795 cache.go:58] Caching tarball of preloaded images
	I1025 10:30:21.860222  474795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:30:21.860307  474795 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:30:21.860317  474795 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 10:30:21.860427  474795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:30:21.881018  474795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:30:21.881043  474795 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:30:21.881057  474795 cache.go:232] Successfully downloaded all kic artifacts
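
Note: the pull of the kicbase image was skipped above because it was already present in the local daemon. One way to reproduce that presence check, assuming only the docker CLI on PATH: `docker image inspect` exits non-zero when the reference is missing.

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether ref is already present in the local
// docker daemon; `docker image inspect` fails for a missing image.
func imageInDaemon(ref string) bool {
	cmd := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref)
	return cmd.Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not found, would pull")
	}
}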
	I1025 10:30:21.881082  474795 start.go:360] acquireMachinesLock for old-k8s-version-610853: {Name:mk4cf5d4a6d8178880fb3a10acdef15766144ca0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:30:21.881148  474795 start.go:364] duration metric: took 41.863µs to acquireMachinesLock for "old-k8s-version-610853"
	I1025 10:30:21.881173  474795 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:30:21.881184  474795 fix.go:54] fixHost starting: 
	I1025 10:30:21.881474  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:21.897981  474795 fix.go:112] recreateIfNeeded on old-k8s-version-610853: state=Stopped err=<nil>
	W1025 10:30:21.898013  474795 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:30:21.901213  474795 out.go:252] * Restarting existing docker container for "old-k8s-version-610853" ...
	I1025 10:30:21.901311  474795 cli_runner.go:164] Run: docker start old-k8s-version-610853
	I1025 10:30:22.160439  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:22.185166  474795 kic.go:430] container "old-k8s-version-610853" state is running.
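
Note: the restart sequence above is `docker start` followed by polling `docker container inspect --format={{.State.Status}}` until the container reports running. A small sketch of that wait loop; the 30-attempt budget is an assumption, not minikube's value:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the docker-reported state string, e.g. "running".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "old-k8s-version-610853"
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		panic(err)
	}
	// Poll until the container reports "running", or give up after ~30s.
	for i := 0; i < 30; i++ {
		if st, err := containerState(name); err == nil && st == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for running state")
}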
	I1025 10:30:22.185740  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:22.207694  474795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:30:22.207923  474795 machine.go:93] provisionDockerMachine start ...
	I1025 10:30:22.207986  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:22.233454  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:22.233781  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:22.233792  474795 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:30:22.235830  474795 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
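
Note: the dial above fails with a handshake EOF because sshd inside the just-restarted container is not up yet; the provisioner simply retries until the forwarded port answers, as the successful command roughly three seconds later shows. A generic retry sketch over a plain TCP dial (not a full SSH handshake; attempt count and delay are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitTCP retries a plain TCP dial until the forwarded SSH port accepts
// connections; early attempts fail while the container's sshd starts up.
func waitTCP(addr string, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}

func main() {
	if err := waitTCP("127.0.0.1:33427", 15, time.Second); err != nil {
		panic(err)
	}
	fmt.Println("port is accepting connections")
}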
	I1025 10:30:25.382689  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:30:25.382818  474795 ubuntu.go:182] provisioning hostname "old-k8s-version-610853"
	I1025 10:30:25.382902  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:25.399953  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:25.400300  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:25.400319  474795 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-610853 && echo "old-k8s-version-610853" | sudo tee /etc/hostname
	I1025 10:30:25.561928  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:30:25.562044  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:25.580685  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:25.581013  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:25.581038  474795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-610853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-610853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-610853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:30:25.731525  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:30:25.731551  474795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:30:25.731573  474795 ubuntu.go:190] setting up certificates
	I1025 10:30:25.731582  474795 provision.go:84] configureAuth start
	I1025 10:30:25.731646  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:25.749805  474795 provision.go:143] copyHostCerts
	I1025 10:30:25.749872  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:30:25.749896  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:30:25.749974  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:30:25.750079  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:30:25.750129  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:30:25.750163  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:30:25.750221  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:30:25.750230  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:30:25.750257  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:30:25.750311  474795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-610853 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-610853]
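
Note: the server cert above is generated with the listed SANs and signed by the profile CA. A compact sketch of minting a cert with the same SAN set via crypto/x509; it self-signs for brevity, where minikube signs with its ca.pem/ca-key.pem pair:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	// SANs copied from the provisioning log line above.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-610853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-610853"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}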
	I1025 10:30:26.555306  474795 provision.go:177] copyRemoteCerts
	I1025 10:30:26.555379  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:30:26.555420  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:26.573290  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:26.682920  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:30:26.700595  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:30:26.717422  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:30:26.734368  474795 provision.go:87] duration metric: took 1.002758613s to configureAuth
	I1025 10:30:26.734396  474795 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:30:26.734590  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:26.734695  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:26.751794  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:26.752128  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:26.752150  474795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:30:27.077387  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:30:27.077408  474795 machine.go:96] duration metric: took 4.869468675s to provisionDockerMachine
	I1025 10:30:27.077418  474795 start.go:293] postStartSetup for "old-k8s-version-610853" (driver="docker")
	I1025 10:30:27.077445  474795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:30:27.077515  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:30:27.077558  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.098352  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.203605  474795 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:30:27.206853  474795 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:30:27.206880  474795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:30:27.206891  474795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:30:27.206945  474795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:30:27.207023  474795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:30:27.207131  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:30:27.214777  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:30:27.232510  474795 start.go:296] duration metric: took 155.075678ms for postStartSetup
	I1025 10:30:27.232602  474795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:30:27.232642  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.249916  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.352609  474795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:30:27.357440  474795 fix.go:56] duration metric: took 5.47624872s for fixHost
	I1025 10:30:27.357466  474795 start.go:83] releasing machines lock for "old-k8s-version-610853", held for 5.476304303s
	I1025 10:30:27.357553  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:27.377332  474795 ssh_runner.go:195] Run: cat /version.json
	I1025 10:30:27.377411  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.377690  474795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:30:27.377751  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.395829  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.413582  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.507085  474795 ssh_runner.go:195] Run: systemctl --version
	I1025 10:30:27.598005  474795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:30:27.640905  474795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:30:27.645285  474795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:30:27.645370  474795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:30:27.653139  474795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:30:27.653216  474795 start.go:495] detecting cgroup driver to use...
	I1025 10:30:27.653257  474795 detect.go:187] detected "cgroupfs" cgroup driver on host os
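
Note: "detected cgroupfs" matches the CgroupDriver field in the docker info dump earlier in this log. The same field can be queried on its own, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The docker info dump above includes CgroupDriver:cgroupfs; the
	// field is directly addressable with a Go template.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // e.g. "cgroupfs"
}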
	I1025 10:30:27.653317  474795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:30:27.668450  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:30:27.681676  474795 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:30:27.681771  474795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:30:27.697565  474795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:30:27.710039  474795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:30:27.838088  474795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:30:27.962956  474795 docker.go:234] disabling docker service ...
	I1025 10:30:27.963038  474795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:30:27.978989  474795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:30:27.992469  474795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:30:28.118829  474795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:30:28.235517  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:30:28.248294  474795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:30:28.262434  474795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:30:28.262519  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.271985  474795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:30:28.272055  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.281277  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.290102  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.299016  474795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:30:28.307399  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.316249  474795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.324867  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.333853  474795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:30:28.341389  474795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:30:28.348892  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:28.480909  474795 ssh_runner.go:195] Run: sudo systemctl restart crio
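
Note: the sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before restarting crio. A sketch of the two whole-line substitutions as Go regexps over the file contents, operating on a string here rather than over SSH:

package main

import (
	"fmt"
	"regexp"
)

// rewrite mirrors the first two sed edits above: replace whole-line
// assignments of pause_image and cgroup_manager in a crio.conf-style file.
func rewrite(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.10\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewrite(in))
}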
	I1025 10:30:28.619687  474795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:30:28.619807  474795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:30:28.624048  474795 start.go:563] Will wait 60s for crictl version
	I1025 10:30:28.624186  474795 ssh_runner.go:195] Run: which crictl
	I1025 10:30:28.628124  474795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:30:28.656491  474795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:30:28.656635  474795 ssh_runner.go:195] Run: crio --version
	I1025 10:30:28.689161  474795 ssh_runner.go:195] Run: crio --version
	I1025 10:30:28.723436  474795 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:30:28.726350  474795 cli_runner.go:164] Run: docker network inspect old-k8s-version-610853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:30:28.742882  474795 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:30:28.746514  474795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
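
Note: the one-liner above keeps the host.minikube.internal entry idempotent: strip any existing line for the name, then append a fresh "ip<TAB>name" record. The same logic in Go, over an in-memory copy of /etc/hosts:

package main

import (
	"fmt"
	"strings"
)

// upsertHost mirrors the shell one-liner above: drop any line already
// ending with "<TAB>name", then append "ip<TAB>name", so repeated runs
// leave exactly one entry.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, l := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(l, "\t"+name) {
			kept = append(kept, l)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.85.1", "host.minikube.internal"))
}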
	I1025 10:30:28.756533  474795 kubeadm.go:883] updating cluster {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:30:28.756651  474795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:30:28.756705  474795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:30:28.794572  474795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:30:28.794597  474795 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:30:28.794659  474795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:30:28.823129  474795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:30:28.823185  474795 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:30:28.823193  474795 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:30:28.823300  474795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-610853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
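
Note: the kubelet systemd drop-in above is rendered from the node's name, IP, and Kubernetes version. A minimal text/template sketch of that rendering; the unit body is trimmed to the flags shown, and the real drop-in carries more:

package main

import (
	"os"
	"text/template"
)

// unit is a cut-down version of the drop-in printed in the log above.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Name}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	t.Execute(os.Stdout, map[string]string{
		"Version": "v1.28.0", "Name": "old-k8s-version-610853", "IP": "192.168.85.2",
	})
}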
	I1025 10:30:28.823385  474795 ssh_runner.go:195] Run: crio config
	I1025 10:30:28.896000  474795 cni.go:84] Creating CNI manager for ""
	I1025 10:30:28.896024  474795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:30:28.896067  474795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:30:28.896113  474795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-610853 NodeName:old-k8s-version-610853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:30:28.896261  474795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-610853"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
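
Note: the kubeadm config above is written out as plain YAML and, a few lines below, copied to /var/tmp/minikube/kubeadm.yaml.new. A quick sanity check on one fragment, the evictionHard block that disables disk-pressure eviction, using gopkg.in/yaml.v3 (assumed available; any YAML decoder works):

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// fragment is lifted verbatim from the KubeletConfiguration above.
const fragment = `
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`

func main() {
	var cfg struct {
		EvictionHard map[string]string `yaml:"evictionHard"`
		FailSwapOn   bool              `yaml:"failSwapOn"`
	}
	if err := yaml.Unmarshal([]byte(fragment), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", cfg)
}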
	
	I1025 10:30:28.896333  474795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:30:28.904167  474795 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:30:28.904253  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:30:28.912007  474795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:30:28.925192  474795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:30:28.938665  474795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 10:30:28.951617  474795 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:30:28.955071  474795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:30:28.964538  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:29.074224  474795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:30:29.089856  474795 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853 for IP: 192.168.85.2
	I1025 10:30:29.089926  474795 certs.go:195] generating shared ca certs ...
	I1025 10:30:29.089957  474795 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.090157  474795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:30:29.090246  474795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:30:29.090269  474795 certs.go:257] generating profile certs ...
	I1025 10:30:29.090407  474795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.key
	I1025 10:30:29.090501  474795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be
	I1025 10:30:29.090576  474795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key
	I1025 10:30:29.090734  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:30:29.090810  474795 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:30:29.090837  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:30:29.090901  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:30:29.090955  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:30:29.091005  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:30:29.091082  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:30:29.091889  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:30:29.115836  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:30:29.133637  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:30:29.151405  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:30:29.170766  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:30:29.189296  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:30:29.207620  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:30:29.227302  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:30:29.247293  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:30:29.265777  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:30:29.290597  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:30:29.320376  474795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:30:29.333907  474795 ssh_runner.go:195] Run: openssl version
	I1025 10:30:29.344851  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:30:29.353926  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.357628  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.357742  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.400300  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:30:29.407909  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:30:29.415845  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.420413  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.420528  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.463036  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:30:29.471564  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:30:29.480259  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.484123  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.484239  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.525244  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:30:29.532938  474795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:30:29.536811  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:30:29.578254  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:30:29.619220  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:30:29.660208  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:30:29.701380  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:30:29.750979  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
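
Note: each `openssl x509 ... -checkend 86400` call above asks whether the certificate stays valid for at least another day. An equivalent check with crypto/x509; pass a PEM certificate path as the first argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `-checkend 86400`: valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 86400s")
}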
	I1025 10:30:29.801708  474795 kubeadm.go:400] StartCluster: {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:29.801795  474795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:30:29.801854  474795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:30:29.857117  474795 cri.go:89] found id: ""
	I1025 10:30:29.857196  474795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:30:29.875564  474795 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:30:29.875585  474795 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:30:29.875650  474795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:30:29.885643  474795 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:30:29.886197  474795 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-610853" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:29.886451  474795 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-610853" cluster setting kubeconfig missing "old-k8s-version-610853" context setting]
	I1025 10:30:29.886949  474795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.888756  474795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:30:29.904651  474795 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:30:29.904690  474795 kubeadm.go:601] duration metric: took 29.090124ms to restartPrimaryControlPlane
	I1025 10:30:29.904704  474795 kubeadm.go:402] duration metric: took 103.004938ms to StartCluster
	I1025 10:30:29.904719  474795 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.904784  474795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:29.905772  474795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.905972  474795 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:30:29.906456  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:29.906423  474795 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:30:29.906800  474795 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-610853"
	I1025 10:30:29.906882  474795 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-610853"
	W1025 10:30:29.906909  474795 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:30:29.907006  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.906806  474795 addons.go:69] Setting dashboard=true in profile "old-k8s-version-610853"
	I1025 10:30:29.907081  474795 addons.go:238] Setting addon dashboard=true in "old-k8s-version-610853"
	W1025 10:30:29.907096  474795 addons.go:247] addon dashboard should already be in state true
	I1025 10:30:29.907121  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.907598  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.908068  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.906818  474795 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-610853"
	I1025 10:30:29.908517  474795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-610853"
	I1025 10:30:29.908788  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.912762  474795 out.go:179] * Verifying Kubernetes components...
	I1025 10:30:29.920676  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:29.956127  474795 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-610853"
	W1025 10:30:29.956156  474795 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:30:29.956184  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.956603  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.974006  474795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:30:29.976925  474795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:30:29.979851  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:30:29.979875  474795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:30:29.979951  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.002057  474795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:30:30.006968  474795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:30:30.006995  474795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:30:30.007072  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.024205  474795 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:30:30.024231  474795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:30:30.024306  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.067088  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.084610  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.087478  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.288062  474795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:30:30.317822  474795 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-610853" to be "Ready" ...
	I1025 10:30:30.346705  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:30:30.346776  474795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:30:30.381407  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:30:30.381494  474795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:30:30.399727  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:30:30.405253  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:30:30.424080  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:30:30.424164  474795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:30:30.527727  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:30:30.527809  474795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:30:30.556475  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:30:30.556549  474795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:30:30.591305  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:30:30.591380  474795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:30:30.659906  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:30:30.659979  474795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:30:30.718382  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:30:30.718454  474795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:30:30.748566  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:30:30.748640  474795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:30:30.771569  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:30:34.313514  474795 node_ready.go:49] node "old-k8s-version-610853" is "Ready"
	I1025 10:30:34.313584  474795 node_ready.go:38] duration metric: took 3.995682892s for node "old-k8s-version-610853" to be "Ready" ...
	I1025 10:30:34.313614  474795 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:30:34.313700  474795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:30:34.987066  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.587244642s)
	I1025 10:30:35.767395  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.362061141s)
	I1025 10:30:36.536896  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.765232428s)
	I1025 10:30:36.537200  474795 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.223466362s)
	I1025 10:30:36.537220  474795 api_server.go:72] duration metric: took 6.631225534s to wait for apiserver process to appear ...
	I1025 10:30:36.537226  474795 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:30:36.537242  474795 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:30:36.540151  474795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-610853 addons enable metrics-server
	
	I1025 10:30:36.543238  474795 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 10:30:36.546997  474795 addons.go:514] duration metric: took 6.640573713s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 10:30:36.549348  474795 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:30:36.550837  474795 api_server.go:141] control plane version: v1.28.0
	I1025 10:30:36.550859  474795 api_server.go:131] duration metric: took 13.627495ms to wait for apiserver health ...
	I1025 10:30:36.550868  474795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:30:36.556216  474795 system_pods.go:59] 8 kube-system pods found
	I1025 10:30:36.556307  474795 system_pods.go:61] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:30:36.556350  474795 system_pods.go:61] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:30:36.556380  474795 system_pods.go:61] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:30:36.556405  474795 system_pods.go:61] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:30:36.556428  474795 system_pods.go:61] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:30:36.556464  474795 system_pods.go:61] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:30:36.556495  474795 system_pods.go:61] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:30:36.556517  474795 system_pods.go:61] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Running
	I1025 10:30:36.556542  474795 system_pods.go:74] duration metric: took 5.667617ms to wait for pod list to return data ...
	I1025 10:30:36.556576  474795 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:30:36.563391  474795 default_sa.go:45] found service account: "default"
	I1025 10:30:36.563416  474795 default_sa.go:55] duration metric: took 6.810293ms for default service account to be created ...
	I1025 10:30:36.563426  474795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:30:36.566948  474795 system_pods.go:86] 8 kube-system pods found
	I1025 10:30:36.566976  474795 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:30:36.566986  474795 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:30:36.566992  474795 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:30:36.566999  474795 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:30:36.567015  474795 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:30:36.567026  474795 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:30:36.567033  474795 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:30:36.567037  474795 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Running
	I1025 10:30:36.567044  474795 system_pods.go:126] duration metric: took 3.612288ms to wait for k8s-apps to be running ...
	I1025 10:30:36.567052  474795 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:30:36.567107  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:30:36.582620  474795 system_svc.go:56] duration metric: took 15.559048ms WaitForService to wait for kubelet
	I1025 10:30:36.582644  474795 kubeadm.go:586] duration metric: took 6.676648484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:30:36.582662  474795 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:30:36.585810  474795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:30:36.585836  474795 node_conditions.go:123] node cpu capacity is 2
	I1025 10:30:36.585847  474795 node_conditions.go:105] duration metric: took 3.180242ms to run NodePressure ...
	I1025 10:30:36.585861  474795 start.go:241] waiting for startup goroutines ...
	I1025 10:30:36.585868  474795 start.go:246] waiting for cluster config update ...
	I1025 10:30:36.585878  474795 start.go:255] writing updated cluster config ...
	I1025 10:30:36.586141  474795 ssh_runner.go:195] Run: rm -f paused
	I1025 10:30:36.589697  474795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:30:36.594316  474795 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:30:38.600266  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:40.600441  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:43.100748  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:45.102117  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:47.600659  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:49.601212  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:51.610200  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:54.103975  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:56.600286  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:59.099726  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:01.101034  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:03.600285  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:06.101781  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:08.599772  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:10.600013  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	I1025 10:31:11.600400  474795 pod_ready.go:94] pod "coredns-5dd5756b68-mp4xx" is "Ready"
	I1025 10:31:11.600425  474795 pod_ready.go:86] duration metric: took 35.006080498s for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.603376  474795 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.608287  474795 pod_ready.go:94] pod "etcd-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.608315  474795 pod_ready.go:86] duration metric: took 4.910043ms for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.611454  474795 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.616451  474795 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.616475  474795 pod_ready.go:86] duration metric: took 4.998225ms for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.619688  474795 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.798805  474795 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.798831  474795 pod_ready.go:86] duration metric: took 179.116757ms for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.999917  474795 pod_ready.go:83] waiting for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.398999  474795 pod_ready.go:94] pod "kube-proxy-pvxrq" is "Ready"
	I1025 10:31:12.399029  474795 pod_ready.go:86] duration metric: took 399.08217ms for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.599608  474795 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.999115  474795 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-610853" is "Ready"
	I1025 10:31:12.999177  474795 pod_ready.go:86] duration metric: took 399.510171ms for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.999228  474795 pod_ready.go:40] duration metric: took 36.409491139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:31:13.055214  474795 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:31:13.058157  474795 out.go:203] 
	W1025 10:31:13.060992  474795 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:31:13.063856  474795 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:31:13.067316  474795 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-610853" cluster and "default" namespace by default
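
The `node_ready.go` wait above polls the Node object until its Ready condition reports True. A minimal client-go sketch of the same check, assuming a recent client-go (PollUntilContextTimeout needs apimachinery >= v0.27); the kubeconfig path and node name are the ones shown earlier in this log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21794-292167/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Poll every 2s, up to the 6m0s budget the log reports, until Ready is True.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "old-k8s-version-610853", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		fmt.Println("node ready:", err == nil)
	}
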
	
	
	==> CRI-O <==
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.657644887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.664637812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.665139406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.682910658Z" level=info msg="Created container ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper" id=b46a940b-47aa-4a0d-97ba-4294a7784139 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.683677257Z" level=info msg="Starting container: ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118" id=59b67ddd-8c3d-409d-a657-2b8083d3233d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.685524386Z" level=info msg="Started container" PID=1629 containerID=ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper id=59b67ddd-8c3d-409d-a657-2b8083d3233d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe
	Oct 25 10:31:07 old-k8s-version-610853 conmon[1627]: conmon ffc56b496e33c1699567 <ninfo>: container 1629 exited with status 1
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.475922574Z" level=info msg="Removing container: ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.488950669Z" level=info msg="Error loading conmon cgroup of container ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e: cgroup deleted" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.492188724Z" level=info msg="Removed container ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.291871064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.296059212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.29611271Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.296136571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299223838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299258407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299279347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302393273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302433036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302456757Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305540538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305578741Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305605047Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.308874602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.308906947Z" level=info msg="Updated default CNI network name to kindnet"
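
CRI-O's "CNI monitoring event" lines come from an inotify watch on /etc/cni/net.d. A rough standalone equivalent in Go using github.com/fsnotify/fsnotify — a sketch of the watch mechanism, not CRI-O's actual code:

	package main

	import (
		"log"

		"github.com/fsnotify/fsnotify"
	)

	func main() {
		w, err := fsnotify.NewWatcher()
		if err != nil {
			log.Fatal(err)
		}
		defer w.Close()
		// Same directory CRI-O monitors for CNI config changes.
		if err := w.Add("/etc/cni/net.d"); err != nil {
			log.Fatal(err)
		}
		for {
			select {
			case ev := <-w.Events:
				// CREATE/WRITE/RENAME events like the ones logged above.
				log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			case err := <-w.Errors:
				log.Println("watch error:", err)
			}
		}
	}
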
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ffc56b496e33c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   4e81a197fafb3       dashboard-metrics-scraper-5f989dc9cf-vpwjv       kubernetes-dashboard
	091e16d9863a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   b463a1bcf1d99       storage-provisioner                              kube-system
	830e333170d03       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   ebc0291f419de       kubernetes-dashboard-8694d4445c-r2j5g            kubernetes-dashboard
	94efb18a2e973       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   f2d5af5e2ae34       busybox                                          default
	b4a063cf1b815       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           52 seconds ago      Running             coredns                     1                   d574af27702da       coredns-5dd5756b68-mp4xx                         kube-system
	2fff77ac5e928       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   8065ed89825d3       kindnet-vgctp                                    kube-system
	727f02ebb4a44       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           52 seconds ago      Running             kube-proxy                  1                   e2b07b31c3bc9       kube-proxy-pvxrq                                 kube-system
	ac4d66f1ea5f4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   b463a1bcf1d99       storage-provisioner                              kube-system
	4e548687fb61e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   651aa5a157573       etcd-old-k8s-version-610853                      kube-system
	ce552c2cb6e4e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   346eb58d17ab8       kube-apiserver-old-k8s-version-610853            kube-system
	de79fc3d299d3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   8c760185afd8b       kube-scheduler-old-k8s-version-610853            kube-system
	01ef458479a2b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   1e2fcaa7151dd       kube-controller-manager-old-k8s-version-610853   kube-system
	
	
	==> coredns [b4a063cf1b81586de3620a2e35b3fb766dfd73a20da17e5cb8ba258e8c2b2cfe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60684 - 37659 "HINFO IN 4653879309914932737.8624811222351995741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034934892s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
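
CoreDNS stayed not-ready until its kubernetes plugin could reach the API server (the same 10.96.0.1:443 timeout kindnet reports below). Once it is serving, the resolver can be exercised directly; a sketch pinned to the conventional kube-dns ClusterIP 10.96.0.10, which is an assumption — that address does not appear in this log:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// 10.96.0.10 is the usual kube-dns ClusterIP under this service CIDR; assumed.
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err)
	}
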
	
	
	==> describe nodes <==
	Name:               old-k8s-version-610853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-610853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-610853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:29:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-610853
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-610853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16c3fb75-2c85-4847-b008-4bbd6334ab71
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-mp4xx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-610853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-vgctp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-610853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-610853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-pvxrq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-610853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vpwjv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-r2j5g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-610853 event: Registered Node old-k8s-version-610853 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-610853 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-610853 event: Registered Node old-k8s-version-610853 in Controller
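
The Events table above is assembled from namespaced Event objects whose involvedObject is the node. A client-go sketch of the equivalent query (kubeconfig path as earlier in the log; node events land in the "default" namespace):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21794-292167/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		events, err := cs.CoreV1().Events("default").List(context.Background(), metav1.ListOptions{
			FieldSelector: "involvedObject.kind=Node,involvedObject.name=old-k8s-version-610853",
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range events.Items {
			fmt.Printf("%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
		}
	}
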
	
	
	==> dmesg <==
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4e548687fb61e724d0492eca4c4b6af8ea0790732c7f2dbc6dd4670e9ee4e668] <==
	{"level":"info","ts":"2025-10-25T10:30:30.549665Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:30:30.549675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:30:30.549871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:30:30.549926Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:30:30.549994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:30:30.550019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:30:30.553981Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T10:30:30.554168Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T10:30:30.55419Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:30:30.554308Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:30:30.554316Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:30:31.667204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.671083Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-610853 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:30:31.671194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:30:31.675702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:30:31.67121Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:30:31.6767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:30:31.679221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:30:31.679289Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:28 up  2:13,  0 user,  load average: 1.55, 3.15, 2.91
	Linux old-k8s-version-610853 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2fff77ac5e928293117962aeb11abaa056b2eae73a468ecf54a7ac63f46f3a60] <==
	I1025 10:30:36.078372       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:30:36.078617       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:30:36.078756       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:30:36.078775       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:30:36.078788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:30:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:30:36.290693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:30:36.290766       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:30:36.290832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:30:36.292298       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:31:06.290850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:31:06.290850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:31:06.292073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:31:06.293192       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 10:31:07.690989       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:31:07.691033       1 metrics.go:72] Registering metrics
	I1025 10:31:07.691104       1 controller.go:711] "Syncing nftables rules"
	I1025 10:31:16.290827       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:31:16.290876       1 main.go:301] handling current node
	I1025 10:31:26.295287       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:31:26.295319       1 main.go:301] handling current node
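
kindnet's watch failures against https://10.96.0.1:443 cleared once its caches synced at 10:31:07. A sketch reproducing that probe from inside the cluster network: /version is readable anonymously under the default system:public-info-viewer binding, and certificate verification is skipped only to keep the sketch self-contained — a real probe should trust the cluster CA instead:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		c := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch shortcut only; do not skip verification in real checks.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		// Same URL CoreDNS and kindnet were timing out against.
		resp, err := c.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("dial failed (as in the log):", err)
			return
		}
		defer resp.Body.Close()
		b, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(b))
	}
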
	
	
	==> kube-apiserver [ce552c2cb6e4e7361de884db9ef88fd97d4affae078257f79a846fb8bf14e468] <==
	I1025 10:30:34.261053       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 10:30:34.270313       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:30:34.271548       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:30:34.271612       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:30:34.292221       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:30:34.292739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:30:34.337565       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:30:34.384895       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:30:34.392201       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:30:34.392900       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:30:34.399352       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:30:34.400037       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:30:34.400058       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1025 10:30:34.408015       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:30:35.108502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:30:36.245248       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:30:36.297110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:30:36.336244       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:30:36.374746       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:30:36.403675       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:30:36.505020       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.13.37"}
	I1025 10:30:36.527551       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.102.223"}
	I1025 10:30:47.111257       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:30:47.160898       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:30:47.373761       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
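
This is the apiserver the start-up log polled at api_server.go:253 and saw return 200. The same healthz probe from the host, as a sketch (/healthz is anonymously readable under the default bindings; TLS verification is again skipped for brevity):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		c := &http.Client{
			Timeout:   4 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		// Endpoint taken from the log: https://192.168.85.2:8443/healthz
		resp, err := c.Get("https://192.168.85.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // the log saw 200 "ok"
	}
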
	
	
	==> kube-controller-manager [01ef458479a2b900515d032a9c6b16080bf3eecf88bec56c1db80e3b57c927a1] <==
	I1025 10:30:47.319790       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-r2j5g"
	I1025 10:30:47.327514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="455.01016ms"
	I1025 10:30:47.327614       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	I1025 10:30:47.327792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.3µs"
	I1025 10:30:47.342015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="222.607828ms"
	I1025 10:30:47.343715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="227.439463ms"
	I1025 10:30:47.367876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.005977ms"
	I1025 10:30:47.370314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.652µs"
	I1025 10:30:47.370860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.913µs"
	I1025 10:30:47.370774       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:30:47.370962       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:30:47.398018       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1025 10:30:47.398412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.274263ms"
	I1025 10:30:47.401851       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:30:47.416458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.896514ms"
	I1025 10:30:47.416555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.071µs"
	I1025 10:30:52.449827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.291556ms"
	I1025 10:30:52.450090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.484µs"
	I1025 10:30:57.451357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.766µs"
	I1025 10:30:58.460104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.106µs"
	I1025 10:30:59.463005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.242µs"
	I1025 10:31:08.508130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.65µs"
	I1025 10:31:11.196474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.788046ms"
	I1025 10:31:11.197607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.794µs"
	I1025 10:31:17.672041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.699µs"
	
	
	==> kube-proxy [727f02ebb4a44e8230fd46ee0e62a5d410cc1ab651fb405b54edd36cb5b76a9b] <==
	I1025 10:30:36.092068       1 server_others.go:69] "Using iptables proxy"
	I1025 10:30:36.111814       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:30:36.180557       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:30:36.195667       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:30:36.195771       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:30:36.195804       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:30:36.195858       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:30:36.196151       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:30:36.196202       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:30:36.198611       1 config.go:188] "Starting service config controller"
	I1025 10:30:36.198633       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:30:36.198650       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:30:36.198654       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:30:36.198979       1 config.go:315] "Starting node config controller"
	I1025 10:30:36.198994       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:30:36.299350       1 shared_informer.go:318] Caches are synced for node config
	I1025 10:30:36.299389       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:30:36.299431       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [de79fc3d299d345572eba9b73c7727595ed89922ab269e444d5396b944bf1644] <==
	W1025 10:30:34.303092       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 10:30:34.303126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 10:30:34.303136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:30:34.303199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:30:34.303257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 10:30:34.303293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 10:30:34.303375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:30:34.303512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 10:30:34.303577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 10:30:34.304112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 10:30:34.304072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:30:34.304177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:30:34.304210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 10:30:34.304238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 10:30:34.304313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:30:34.304333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:30:34.304384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:30:34.304421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1025 10:30:35.874545       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:30:41 old-k8s-version-610853 kubelet[778]: I1025 10:30:41.168670     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.325869     778 topology_manager.go:215] "Topology Admit Handler" podUID="b64430c6-825f-484b-9d66-8eb521ff792f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.353093     778 topology_manager.go:215] "Topology Admit Handler" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.401156     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jv22\" (UniqueName: \"kubernetes.io/projected/b64430c6-825f-484b-9d66-8eb521ff792f-kube-api-access-4jv22\") pod \"kubernetes-dashboard-8694d4445c-r2j5g\" (UID: \"b64430c6-825f-484b-9d66-8eb521ff792f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.401358     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b64430c6-825f-484b-9d66-8eb521ff792f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-r2j5g\" (UID: \"b64430c6-825f-484b-9d66-8eb521ff792f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.502379     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6j4m\" (UniqueName: \"kubernetes.io/projected/ff8834ac-0c12-4190-83d6-bf94a1049287-kube-api-access-m6j4m\") pod \"dashboard-metrics-scraper-5f989dc9cf-vpwjv\" (UID: \"ff8834ac-0c12-4190-83d6-bf94a1049287\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.502596     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ff8834ac-0c12-4190-83d6-bf94a1049287-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vpwjv\" (UID: \"ff8834ac-0c12-4190-83d6-bf94a1049287\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: W1025 10:30:47.675578     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe WatchSource:0}: Error finding container 4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe: Status 404 returned error can't find the container with id 4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe
	Oct 25 10:30:57 old-k8s-version-610853 kubelet[778]: I1025 10:30:57.430636     778 scope.go:117] "RemoveContainer" containerID="06feaf51a843fee5e053ff4ac724eaa39ea49aee6810dd9f7b44fd453a4e4e20"
	Oct 25 10:30:57 old-k8s-version-610853 kubelet[778]: I1025 10:30:57.450651     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g" podStartSLOduration=5.842347327 podCreationTimestamp="2025-10-25 10:30:47 +0000 UTC" firstStartedPulling="2025-10-25 10:30:47.655088664 +0000 UTC m=+18.562252866" lastFinishedPulling="2025-10-25 10:30:52.26260767 +0000 UTC m=+23.169771872" observedRunningTime="2025-10-25 10:30:52.437106737 +0000 UTC m=+23.344270939" watchObservedRunningTime="2025-10-25 10:30:57.449866333 +0000 UTC m=+28.357030527"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: I1025 10:30:58.441334     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: I1025 10:30:58.441856     778 scope.go:117] "RemoveContainer" containerID="06feaf51a843fee5e053ff4ac724eaa39ea49aee6810dd9f7b44fd453a4e4e20"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: E1025 10:30:58.442281     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:30:59 old-k8s-version-610853 kubelet[778]: I1025 10:30:59.449288     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:30:59 old-k8s-version-610853 kubelet[778]: E1025 10:30:59.449629     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:06 old-k8s-version-610853 kubelet[778]: I1025 10:31:06.465310     778 scope.go:117] "RemoveContainer" containerID="ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda"
	Oct 25 10:31:07 old-k8s-version-610853 kubelet[778]: I1025 10:31:07.654655     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: I1025 10:31:08.473701     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: I1025 10:31:08.474006     778 scope.go:117] "RemoveContainer" containerID="ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: E1025 10:31:08.474331     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:17 old-k8s-version-610853 kubelet[778]: I1025 10:31:17.655145     778 scope.go:117] "RemoveContainer" containerID="ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	Oct 25 10:31:17 old-k8s-version-610853 kubelet[778]: E1025 10:31:17.655493     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [830e333170d03998870a50498589369cec9f3aec50bea277636833a4af430c9d] <==
	2025/10/25 10:30:52 Using namespace: kubernetes-dashboard
	2025/10/25 10:30:52 Using in-cluster config to connect to apiserver
	2025/10/25 10:30:52 Using secret token for csrf signing
	2025/10/25 10:30:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:30:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:30:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:30:52 Generating JWE encryption key
	2025/10/25 10:30:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:30:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:30:53 Initializing JWE encryption key from synchronized object
	2025/10/25 10:30:53 Creating in-cluster Sidecar client
	2025/10/25 10:30:53 Serving insecurely on HTTP port: 9090
	2025/10/25 10:30:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:31:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:30:52 Starting overwatch
	
	
	==> storage-provisioner [091e16d9863a0d528c1db558671f37699e8fef853fcec9f0ddb84719849a6993] <==
	I1025 10:31:06.529173       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:31:06.542499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:31:06.543661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:31:23.947538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:31:23.947781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268!
	I1025 10:31:23.951643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24744037-24e8-4570-96d7-2db397f7e01e", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268 became leader
	I1025 10:31:24.050412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268!
	
	
	==> storage-provisioner [ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda] <==
	I1025 10:30:35.960087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:31:05.962540       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
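The second storage-provisioner instance above exits fatally on an API-server reachability timeout against the in-cluster service IP (10.96.0.1:443). A minimal in-cluster probe of that endpoint, shown purely as a hypothetical diagnostic (the pod name and image below are illustrative and not part of this run):

	kubectl --context old-k8s-version-610853 run api-probe --rm -i --restart=Never \
	  --image=curlimages/curl -- curl -ksS --max-time 10 https://10.96.0.1:443/version

A reachable endpoint returns the version JSON (v1.28.0 for this cluster); a hang ending in a timeout reproduces the i/o timeout in the provisioner log.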
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610853 -n old-k8s-version-610853: exit status 2 (381.667367ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-610853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-610853
helpers_test.go:243: (dbg) docker inspect old-k8s-version-610853:

-- stdout --
	[
	    {
	        "Id": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	        "Created": "2025-10-25T10:28:59.57788081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474923,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:30:21.931816315Z",
	            "FinishedAt": "2025-10-25T10:30:21.119343628Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/hosts",
	        "LogPath": "/var/lib/docker/containers/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2-json.log",
	        "Name": "/old-k8s-version-610853",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-610853:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-610853",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2",
	                "LowerDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f26c3023df3151bbd59006b3509ec301ab9c593728a2fdf00fb4b7492c86c22e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-610853",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-610853/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-610853",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-610853",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43f5489946cf1f2b9808af7448809b548f726326e33114f919d07bc836c3a181",
	            "SandboxKey": "/var/run/docker/netns/43f5489946cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-610853": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:d9:c1:96:b0:03",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "81225534a6ecbdb108a21a8d61134e13e2b296f3c48ec26db1c8d60aa1908e7c",
	                    "EndpointID": "24cca6b8f150e3abdf20ff03ab17f1f764325c02274f9f29bcfc659fe0a84923",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-610853",
	                        "d9ac8e10f5b1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
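Single fields of the inspect payload above can be read with a Go template instead of parsing the full JSON; this sketch reuses the exact template the harness itself runs later in this log to recover the forwarded SSH port:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-610853

Against the state captured above it prints 33427, the 127.0.0.1-bound host port mapped to the container's port 22.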
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853: exit status 2 (349.245452ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
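The two status probes above use single-field Go templates ({{.APIServer}} earlier, {{.Host}} here); --format accepts any template over the status struct, so several components can be checked in one call. A sketch, assuming a Kubelet field alongside the Host and APIServer fields already exercised in this log:

	out/minikube-linux-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p old-k8s-version-610853 -n old-k8s-version-610853

For this paused cluster that would be expected to report the host and apiserver Running with the kubelet Stopped, matching the kubelet.service shutdown in the journal above and the exit status 2 the harness tolerates.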
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-610853 logs -n 25: (1.383599227s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-821614 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo containerd config dump                                                                                                                                                                                                  │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo crio config                                                                                                                                                                                                             │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-821614                                                                                                                                                                                                                              │ cilium-821614             │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:27 UTC │
	│ start   │ -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p kubernetes-upgrade-845331                                                                                                                                                                                                                  │ kubernetes-upgrade-845331 │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-313068    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-068963                                                                                                                                                                                                                   │ force-systemd-env-068963  │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-506318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853    │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:30:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:30:21.667394  474795 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:30:21.667530  474795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:30:21.667541  474795 out.go:374] Setting ErrFile to fd 2...
	I1025 10:30:21.667546  474795 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:30:21.667820  474795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:30:21.668210  474795 out.go:368] Setting JSON to false
	I1025 10:30:21.669089  474795 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7972,"bootTime":1761380250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:30:21.669152  474795 start.go:141] virtualization:  
	I1025 10:30:21.672307  474795 out.go:179] * [old-k8s-version-610853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:30:21.676152  474795 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:30:21.676274  474795 notify.go:220] Checking for updates...
	I1025 10:30:21.682115  474795 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:30:21.685046  474795 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:21.687991  474795 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:30:21.690880  474795 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:30:21.693796  474795 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:30:21.697237  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:21.700700  474795 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1025 10:30:21.703549  474795 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:30:21.727343  474795 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:30:21.727467  474795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:30:21.783871  474795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:30:21.775079126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:30:21.783979  474795 docker.go:318] overlay module found
	I1025 10:30:21.787143  474795 out.go:179] * Using the docker driver based on existing profile
	I1025 10:30:21.790048  474795 start.go:305] selected driver: docker
	I1025 10:30:21.790068  474795 start.go:925] validating driver "docker" against &{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:21.790163  474795 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:30:21.790889  474795 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:30:21.848060  474795 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:30:21.839044432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:30:21.848435  474795 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:30:21.848472  474795 cni.go:84] Creating CNI manager for ""
	I1025 10:30:21.848539  474795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:30:21.848580  474795 start.go:349] cluster config:
	{Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:21.851879  474795 out.go:179] * Starting "old-k8s-version-610853" primary control-plane node in "old-k8s-version-610853" cluster
	I1025 10:30:21.854577  474795 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:30:21.857455  474795 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:30:21.860135  474795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:30:21.860196  474795 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 10:30:21.860209  474795 cache.go:58] Caching tarball of preloaded images
	I1025 10:30:21.860222  474795 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:30:21.860307  474795 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:30:21.860317  474795 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
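The preload step above is a cache-before-download check: the tarball name encodes the Kubernetes version, container runtime, and architecture, and a hit on disk skips the network entirely. A minimal Go sketch of that lookup; the cache layout and the v18 naming scheme are copied from the path in the log, while preloadTarball itself is an illustrative helper, not minikube's API:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadTarball builds the expected path of a preloaded-images tarball for a
// given Kubernetes version, runtime, and arch, mirroring the file name
// visible in the log above. cacheDir is illustrative.
func preloadTarball(cacheDir, k8sVersion, runtime, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-%s.tar.lz4",
		k8sVersion, runtime, arch)
	return filepath.Join(cacheDir, "preloaded-tarball", name)
}

func main() {
	p := preloadTarball(os.Getenv("HOME")+"/.minikube/cache", "v1.28.0", "cri-o", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}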
	I1025 10:30:21.860427  474795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:30:21.881018  474795 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:30:21.881043  474795 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:30:21.881057  474795 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:30:21.881082  474795 start.go:360] acquireMachinesLock for old-k8s-version-610853: {Name:mk4cf5d4a6d8178880fb3a10acdef15766144ca0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:30:21.881148  474795 start.go:364] duration metric: took 41.863µs to acquireMachinesLock for "old-k8s-version-610853"
	I1025 10:30:21.881173  474795 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:30:21.881184  474795 fix.go:54] fixHost starting: 
	I1025 10:30:21.881474  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:21.897981  474795 fix.go:112] recreateIfNeeded on old-k8s-version-610853: state=Stopped err=<nil>
	W1025 10:30:21.898013  474795 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:30:21.901213  474795 out.go:252] * Restarting existing docker container for "old-k8s-version-610853" ...
	I1025 10:30:21.901311  474795 cli_runner.go:164] Run: docker start old-k8s-version-610853
	I1025 10:30:22.160439  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:22.185166  474795 kic.go:430] container "old-k8s-version-610853" state is running.
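Restarting a stopped profile is just `docker start` followed by polling the container state, exactly as the cli_runner lines above show. A hedged Go equivalent using os/exec; the container name comes from this log, and error handling is trimmed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState shells out to the docker CLI the same way cli_runner does above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "old-k8s-version-610853" // profile name taken from this log
	exec.Command("docker", "start", name).Run() // best effort, like the log
	for i := 0; i < 20; i++ { // poll until the container reports "running"
		if s, err := containerState(name); err == nil && s == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for container")
}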
	I1025 10:30:22.185740  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:22.207694  474795 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/config.json ...
	I1025 10:30:22.207923  474795 machine.go:93] provisionDockerMachine start ...
	I1025 10:30:22.207986  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:22.233454  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:22.233781  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:22.233792  474795 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:30:22.235830  474795 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:30:25.382689  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:30:25.382818  474795 ubuntu.go:182] provisioning hostname "old-k8s-version-610853"
	I1025 10:30:25.382902  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:25.399953  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:25.400300  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:25.400319  474795 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-610853 && echo "old-k8s-version-610853" | sudo tee /etc/hostname
	I1025 10:30:25.561928  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610853
	
	I1025 10:30:25.562044  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:25.580685  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:25.581013  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:25.581038  474795 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-610853' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-610853/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-610853' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:30:25.731525  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:30:25.731551  474795 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:30:25.731573  474795 ubuntu.go:190] setting up certificates
	I1025 10:30:25.731582  474795 provision.go:84] configureAuth start
	I1025 10:30:25.731646  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:25.749805  474795 provision.go:143] copyHostCerts
	I1025 10:30:25.749872  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:30:25.749896  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:30:25.749974  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:30:25.750079  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:30:25.750129  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:30:25.750163  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:30:25.750221  474795 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:30:25.750230  474795 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:30:25.750257  474795 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:30:25.750311  474795 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-610853 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-610853]
	I1025 10:30:26.555306  474795 provision.go:177] copyRemoteCerts
	I1025 10:30:26.555379  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:30:26.555420  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:26.573290  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:26.682920  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:30:26.700595  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1025 10:30:26.717422  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:30:26.734368  474795 provision.go:87] duration metric: took 1.002758613s to configureAuth
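configureAuth regenerates the machine's server certificate so that its SAN list covers every address the daemon may be reached by; the san=[...] entry above shows exactly that list. A trimmed-down sketch using Go's crypto/x509, self-signed for brevity where minikube signs with its local CA; the names and IPs are copied from the log:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-610853"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-610853"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}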
	I1025 10:30:26.734396  474795 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:30:26.734590  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:26.734695  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:26.751794  474795 main.go:141] libmachine: Using SSH client type: native
	I1025 10:30:26.752128  474795 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33427 <nil> <nil>}
	I1025 10:30:26.752150  474795 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:30:27.077387  474795 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:30:27.077408  474795 machine.go:96] duration metric: took 4.869468675s to provisionDockerMachine
	I1025 10:30:27.077418  474795 start.go:293] postStartSetup for "old-k8s-version-610853" (driver="docker")
	I1025 10:30:27.077445  474795 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:30:27.077515  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:30:27.077558  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.098352  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.203605  474795 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:30:27.206853  474795 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:30:27.206880  474795 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:30:27.206891  474795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:30:27.206945  474795 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:30:27.207023  474795 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:30:27.207131  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:30:27.214777  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:30:27.232510  474795 start.go:296] duration metric: took 155.075678ms for postStartSetup
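The filesync pass inside postStartSetup is a straight tree walk: every file under ~/.minikube/files is pushed to the identical absolute path on the node, which is how 2940172.pem above lands in /etc/ssl/certs. A sketch of that mapping, assuming a local stand-in directory:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// Walks the local-assets tree and prints the node path each file maps to,
// e.g. files/etc/ssl/certs/2940172.pem -> /etc/ssl/certs/2940172.pem.
func main() {
	root := "files" // stand-in for ~/.minikube/files
	filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		fmt.Printf("local asset: %s -> /%s\n", p, rel)
		return nil
	})
}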
	I1025 10:30:27.232602  474795 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:30:27.232642  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.249916  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.352609  474795 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:30:27.357440  474795 fix.go:56] duration metric: took 5.47624872s for fixHost
	I1025 10:30:27.357466  474795 start.go:83] releasing machines lock for "old-k8s-version-610853", held for 5.476304303s
	I1025 10:30:27.357553  474795 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610853
	I1025 10:30:27.377332  474795 ssh_runner.go:195] Run: cat /version.json
	I1025 10:30:27.377411  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.377690  474795 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:30:27.377751  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:27.395829  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.413582  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:27.507085  474795 ssh_runner.go:195] Run: systemctl --version
	I1025 10:30:27.598005  474795 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:30:27.640905  474795 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:30:27.645285  474795 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:30:27.645370  474795 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:30:27.653139  474795 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
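The find/-exec mv invocation above sidelines any bridge or podman CNI config by renaming it with a .mk_disabled suffix, leaving kindnet as the only active plugin. The same idea in Go, pointed at a scratch directory (the real path is /etc/cni/net.d and needs root):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	dir := "net.d" // stand-in for /etc/cni/net.d
	entries, _ := os.ReadDir(dir)
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled, or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			os.Rename(old, old+".mk_disabled")
			fmt.Println("disabled", old)
		}
	}
}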
	I1025 10:30:27.653216  474795 start.go:495] detecting cgroup driver to use...
	I1025 10:30:27.653257  474795 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:30:27.653317  474795 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:30:27.668450  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:30:27.681676  474795 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:30:27.681771  474795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:30:27.697565  474795 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:30:27.710039  474795 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:30:27.838088  474795 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:30:27.962956  474795 docker.go:234] disabling docker service ...
	I1025 10:30:27.963038  474795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:30:27.978989  474795 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:30:27.992469  474795 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:30:28.118829  474795 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:30:28.235517  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
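Because only one runtime may own the CRI socket, minikube stops, disables, and masks both cri-docker and docker before restarting CRI-O. The sequence above expressed as data, best-effort like the log, since any of the units may legitimately be absent:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		// Failures are ignored, matching the log's best-effort behaviour.
		if err := exec.Command("sudo", s...).Run(); err != nil {
			fmt.Println("ignored:", s, err)
		}
	}
}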
	I1025 10:30:28.248294  474795 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:30:28.262434  474795 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1025 10:30:28.262519  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.271985  474795 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:30:28.272055  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.281277  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.290102  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.299016  474795 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:30:28.307399  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.316249  474795 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.324867  474795 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:30:28.333853  474795 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:30:28.341389  474795 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:30:28.348892  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:28.480909  474795 ssh_runner.go:195] Run: sudo systemctl restart crio
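The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. One of those edits as a Go sketch, operating on a local copy of the file rather than the real path:

package main

import (
	"os"
	"regexp"
)

// Equivalent of the pause_image sed above: replace whatever pause_image line
// exists with the pinned registry.k8s.io/pause:3.9 from the log.
func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(conf, out, 0644); err != nil {
		panic(err)
	}
}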
	I1025 10:30:28.619687  474795 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:30:28.619807  474795 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:30:28.624048  474795 start.go:563] Will wait 60s for crictl version
	I1025 10:30:28.624186  474795 ssh_runner.go:195] Run: which crictl
	I1025 10:30:28.628124  474795 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:30:28.656491  474795 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:30:28.656635  474795 ssh_runner.go:195] Run: crio --version
	I1025 10:30:28.689161  474795 ssh_runner.go:195] Run: crio --version
	I1025 10:30:28.723436  474795 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1025 10:30:28.726350  474795 cli_runner.go:164] Run: docker network inspect old-k8s-version-610853 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:30:28.742882  474795 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:30:28.746514  474795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
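The brace-group one-liner above is an atomic hosts-file update: filter out any stale host.minikube.internal entry, append the current gateway IP, and copy the temp file over /etc/hosts. The same pattern in Go, run against a local file for safety (blank lines are dropped for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "hosts" // stand-in for /etc/hosts
	data, _ := os.ReadFile(path)
	var kept []string
	for _, l := range strings.Split(string(data), "\n") {
		// Drop any existing host.minikube.internal line, like grep -v above.
		if !strings.HasSuffix(l, "\thost.minikube.internal") && l != "" {
			kept = append(kept, l)
		}
	}
	kept = append(kept, "192.168.85.1\thost.minikube.internal")
	tmp := path + ".tmp"
	os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	if err := os.Rename(tmp, path); err != nil {
		fmt.Println(err)
	}
}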
	I1025 10:30:28.756533  474795 kubeadm.go:883] updating cluster {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:30:28.756651  474795 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 10:30:28.756705  474795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:30:28.794572  474795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:30:28.794597  474795 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:30:28.794659  474795 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:30:28.823129  474795 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:30:28.823185  474795 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:30:28.823193  474795 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1025 10:30:28.823300  474795 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-610853 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
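The kubelet unit above is assembled from the cluster config: a systemd drop-in that clears ExecStart and relaunches kubelet with the node's name, IP, and kubeconfig. A text/template sketch of the same assembly with a trimmed flag set; the field names are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Binary}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	// Values taken from the rendered unit in the log above.
	t.Execute(os.Stdout, map[string]string{
		"Binary": "/var/lib/minikube/binaries/v1.28.0/kubelet",
		"Node":   "old-k8s-version-610853",
		"IP":     "192.168.85.2",
	})
}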
	I1025 10:30:28.823385  474795 ssh_runner.go:195] Run: crio config
	I1025 10:30:28.896000  474795 cni.go:84] Creating CNI manager for ""
	I1025 10:30:28.896024  474795 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:30:28.896067  474795 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:30:28.896113  474795 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-610853 NodeName:old-k8s-version-610853 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:30:28.896261  474795 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-610853"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
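Everything from InitConfiguration down to KubeProxyConfiguration above travels as a single four-document YAML file (the kubeadm.yaml.new scp'd a few lines below). A stdlib-only sanity check that the document separators and kinds survived rendering; real validation would use a YAML parser:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // e.g. /var/tmp/minikube/kubeadm.yaml.new
	if err != nil {
		panic(err)
	}
	docs := strings.Split(string(data), "\n---\n")
	for i, d := range docs {
		for _, line := range strings.Split(d, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("doc %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}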
	
	I1025 10:30:28.896333  474795 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1025 10:30:28.904167  474795 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:30:28.904253  474795 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:30:28.912007  474795 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1025 10:30:28.925192  474795 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:30:28.938665  474795 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1025 10:30:28.951617  474795 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:30:28.955071  474795 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:30:28.964538  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:29.074224  474795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:30:29.089856  474795 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853 for IP: 192.168.85.2
	I1025 10:30:29.089926  474795 certs.go:195] generating shared ca certs ...
	I1025 10:30:29.089957  474795 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.090157  474795 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:30:29.090246  474795 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:30:29.090269  474795 certs.go:257] generating profile certs ...
	I1025 10:30:29.090407  474795 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.key
	I1025 10:30:29.090501  474795 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key.132f89be
	I1025 10:30:29.090576  474795 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key
	I1025 10:30:29.090734  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:30:29.090810  474795 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:30:29.090837  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:30:29.090901  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:30:29.090955  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:30:29.091005  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:30:29.091082  474795 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:30:29.091889  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:30:29.115836  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:30:29.133637  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:30:29.151405  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:30:29.170766  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1025 10:30:29.189296  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:30:29.207620  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:30:29.227302  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:30:29.247293  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:30:29.265777  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:30:29.290597  474795 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:30:29.320376  474795 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:30:29.333907  474795 ssh_runner.go:195] Run: openssl version
	I1025 10:30:29.344851  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:30:29.353926  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.357628  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.357742  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:30:29.400300  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:30:29.407909  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:30:29.415845  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.420413  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.420528  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:30:29.463036  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:30:29.471564  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:30:29.480259  474795 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.484123  474795 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.484239  474795 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:30:29.525244  474795 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
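The test -L || ln -fs pairs above exist because OpenSSL locates trusted CAs by subject-hash symlinks: <hash>.0 must point at the PEM file, with the hash value coming from openssl x509 -hash (b5213941 for minikubeCA in this run). A sketch of the link creation; the paths are the real ones from the log and writing them needs root:

package main

import (
	"fmt"
	"os"
)

func main() {
	link := "/etc/ssl/certs/b5213941.0" // subject hash printed by openssl above
	target := "/etc/ssl/certs/minikubeCA.pem"
	if _, err := os.Lstat(link); err == nil {
		fmt.Println("symlink already present")
		return
	}
	if err := os.Symlink(target, link); err != nil {
		fmt.Println("symlink failed (needs root):", err)
	}
}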
	I1025 10:30:29.532938  474795 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:30:29.536811  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:30:29.578254  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:30:29.619220  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:30:29.660208  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:30:29.701380  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:30:29.750979  474795 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
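Each openssl x509 -checkend 86400 call above asks one question: does the certificate outlive the next 24 hours? The equivalent in Go's crypto/x509, using one of the paths checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 86400 seconds.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another day")
}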
	I1025 10:30:29.801708  474795 kubeadm.go:400] StartCluster: {Name:old-k8s-version-610853 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-610853 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:30:29.801795  474795 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:30:29.801854  474795 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:30:29.857117  474795 cri.go:89] found id: ""
	I1025 10:30:29.857196  474795 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:30:29.875564  474795 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:30:29.875585  474795 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:30:29.875650  474795 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:30:29.885643  474795 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:30:29.886197  474795 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-610853" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:29.886451  474795 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-610853" cluster setting kubeconfig missing "old-k8s-version-610853" context setting]
	I1025 10:30:29.886949  474795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.888756  474795 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:30:29.904651  474795 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:30:29.904690  474795 kubeadm.go:601] duration metric: took 29.090124ms to restartPrimaryControlPlane
	I1025 10:30:29.904704  474795 kubeadm.go:402] duration metric: took 103.004938ms to StartCluster
	I1025 10:30:29.904719  474795 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.904784  474795 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:30:29.905772  474795 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:30:29.905972  474795 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:30:29.906456  474795 config.go:182] Loaded profile config "old-k8s-version-610853": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1025 10:30:29.906423  474795 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:30:29.906800  474795 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-610853"
	I1025 10:30:29.906882  474795 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-610853"
	W1025 10:30:29.906909  474795 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:30:29.907006  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.906806  474795 addons.go:69] Setting dashboard=true in profile "old-k8s-version-610853"
	I1025 10:30:29.907081  474795 addons.go:238] Setting addon dashboard=true in "old-k8s-version-610853"
	W1025 10:30:29.907096  474795 addons.go:247] addon dashboard should already be in state true
	I1025 10:30:29.907121  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.907598  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.908068  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.906818  474795 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-610853"
	I1025 10:30:29.908517  474795 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-610853"
	I1025 10:30:29.908788  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.912762  474795 out.go:179] * Verifying Kubernetes components...
	I1025 10:30:29.920676  474795 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:30:29.956127  474795 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-610853"
	W1025 10:30:29.956156  474795 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:30:29.956184  474795 host.go:66] Checking if "old-k8s-version-610853" exists ...
	I1025 10:30:29.956603  474795 cli_runner.go:164] Run: docker container inspect old-k8s-version-610853 --format={{.State.Status}}
	I1025 10:30:29.974006  474795 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:30:29.976925  474795 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:30:29.979851  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:30:29.979875  474795 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:30:29.979951  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.002057  474795 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:30:30.006968  474795 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:30:30.006995  474795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:30:30.007072  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.024205  474795 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:30:30.024231  474795 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:30:30.024306  474795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610853
	I1025 10:30:30.067088  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.084610  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.087478  474795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33427 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/old-k8s-version-610853/id_rsa Username:docker}
	I1025 10:30:30.288062  474795 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:30:30.317822  474795 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-610853" to be "Ready" ...
	I1025 10:30:30.346705  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:30:30.346776  474795 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:30:30.381407  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:30:30.381494  474795 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:30:30.399727  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:30:30.405253  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:30:30.424080  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:30:30.424164  474795 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:30:30.527727  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:30:30.527809  474795 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:30:30.556475  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:30:30.556549  474795 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:30:30.591305  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:30:30.591380  474795 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:30:30.659906  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:30:30.659979  474795 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:30:30.718382  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:30:30.718454  474795 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:30:30.748566  474795 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:30:30.748640  474795 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:30:30.771569  474795 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:30:34.313514  474795 node_ready.go:49] node "old-k8s-version-610853" is "Ready"
	I1025 10:30:34.313584  474795 node_ready.go:38] duration metric: took 3.995682892s for node "old-k8s-version-610853" to be "Ready" ...
	I1025 10:30:34.313614  474795 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:30:34.313700  474795 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:30:34.987066  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.587244642s)
	I1025 10:30:35.767395  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.362061141s)
	I1025 10:30:36.536896  474795 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.765232428s)
	I1025 10:30:36.537200  474795 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.223466362s)
	I1025 10:30:36.537220  474795 api_server.go:72] duration metric: took 6.631225534s to wait for apiserver process to appear ...
	I1025 10:30:36.537226  474795 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:30:36.537242  474795 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:30:36.540151  474795 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-610853 addons enable metrics-server
	
	I1025 10:30:36.543238  474795 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1025 10:30:36.546997  474795 addons.go:514] duration metric: took 6.640573713s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1025 10:30:36.549348  474795 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:30:36.550837  474795 api_server.go:141] control plane version: v1.28.0
	I1025 10:30:36.550859  474795 api_server.go:131] duration metric: took 13.627495ms to wait for apiserver health ...
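The healthz wait above is a plain HTTPS poll of the apiserver until it answers 200 "ok". A hedged Go version of that loop; certificate verification is skipped here for brevity, whereas minikube trusts its own generated CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expect "ok", as in the log
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver never became healthy")
}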
	I1025 10:30:36.550868  474795 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:30:36.556216  474795 system_pods.go:59] 8 kube-system pods found
	I1025 10:30:36.556307  474795 system_pods.go:61] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:30:36.556350  474795 system_pods.go:61] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:30:36.556380  474795 system_pods.go:61] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:30:36.556405  474795 system_pods.go:61] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:30:36.556428  474795 system_pods.go:61] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:30:36.556464  474795 system_pods.go:61] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:30:36.556495  474795 system_pods.go:61] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:30:36.556517  474795 system_pods.go:61] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Running
	I1025 10:30:36.556542  474795 system_pods.go:74] duration metric: took 5.667617ms to wait for pod list to return data ...
	I1025 10:30:36.556576  474795 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:30:36.563391  474795 default_sa.go:45] found service account: "default"
	I1025 10:30:36.563416  474795 default_sa.go:55] duration metric: took 6.810293ms for default service account to be created ...
	I1025 10:30:36.563426  474795 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:30:36.566948  474795 system_pods.go:86] 8 kube-system pods found
	I1025 10:30:36.566976  474795 system_pods.go:89] "coredns-5dd5756b68-mp4xx" [339b3875-9aea-4d9d-bd92-87082f232a5e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:30:36.566986  474795 system_pods.go:89] "etcd-old-k8s-version-610853" [dae3daf5-67f7-4923-a88e-d0c16e57bb45] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:30:36.566992  474795 system_pods.go:89] "kindnet-vgctp" [6092762b-d84b-4455-aac9-b17e1c0b90e6] Running
	I1025 10:30:36.566999  474795 system_pods.go:89] "kube-apiserver-old-k8s-version-610853" [8021fc38-9ddb-4c3f-b620-c0e814e3b933] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:30:36.567015  474795 system_pods.go:89] "kube-controller-manager-old-k8s-version-610853" [c59c26ce-37b0-4bc8-bb8d-1f8cebc32435] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:30:36.567026  474795 system_pods.go:89] "kube-proxy-pvxrq" [ea082c00-6806-45fc-96a0-de6cbe2b9afd] Running
	I1025 10:30:36.567033  474795 system_pods.go:89] "kube-scheduler-old-k8s-version-610853" [d566f4ab-65a2-4313-84a7-201466e94cc1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:30:36.567037  474795 system_pods.go:89] "storage-provisioner" [7f2741b4-bcad-4266-9634-4b2aee05a1d7] Running
	I1025 10:30:36.567044  474795 system_pods.go:126] duration metric: took 3.612288ms to wait for k8s-apps to be running ...
	I1025 10:30:36.567052  474795 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:30:36.567107  474795 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:30:36.582620  474795 system_svc.go:56] duration metric: took 15.559048ms WaitForService to wait for kubelet
	I1025 10:30:36.582644  474795 kubeadm.go:586] duration metric: took 6.676648484s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:30:36.582662  474795 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:30:36.585810  474795 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:30:36.585836  474795 node_conditions.go:123] node cpu capacity is 2
	I1025 10:30:36.585847  474795 node_conditions.go:105] duration metric: took 3.180242ms to run NodePressure ...
	I1025 10:30:36.585861  474795 start.go:241] waiting for startup goroutines ...
	I1025 10:30:36.585868  474795 start.go:246] waiting for cluster config update ...
	I1025 10:30:36.585878  474795 start.go:255] writing updated cluster config ...
	I1025 10:30:36.586141  474795 ssh_runner.go:195] Run: rm -f paused
	I1025 10:30:36.589697  474795 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:30:36.594316  474795 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:30:38.600266  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:40.600441  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:43.100748  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:45.102117  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:47.600659  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:49.601212  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:51.610200  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:54.103975  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:56.600286  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:30:59.099726  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:01.101034  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:03.600285  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:06.101781  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:08.599772  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	W1025 10:31:10.600013  474795 pod_ready.go:104] pod "coredns-5dd5756b68-mp4xx" is not "Ready", error: <nil>
	I1025 10:31:11.600400  474795 pod_ready.go:94] pod "coredns-5dd5756b68-mp4xx" is "Ready"
	I1025 10:31:11.600425  474795 pod_ready.go:86] duration metric: took 35.006080498s for pod "coredns-5dd5756b68-mp4xx" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.603376  474795 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.608287  474795 pod_ready.go:94] pod "etcd-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.608315  474795 pod_ready.go:86] duration metric: took 4.910043ms for pod "etcd-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.611454  474795 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.616451  474795 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.616475  474795 pod_ready.go:86] duration metric: took 4.998225ms for pod "kube-apiserver-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.619688  474795 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.798805  474795 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-610853" is "Ready"
	I1025 10:31:11.798831  474795 pod_ready.go:86] duration metric: took 179.116757ms for pod "kube-controller-manager-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:11.999917  474795 pod_ready.go:83] waiting for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.398999  474795 pod_ready.go:94] pod "kube-proxy-pvxrq" is "Ready"
	I1025 10:31:12.399029  474795 pod_ready.go:86] duration metric: took 399.08217ms for pod "kube-proxy-pvxrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.599608  474795 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.999115  474795 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-610853" is "Ready"
	I1025 10:31:12.999177  474795 pod_ready.go:86] duration metric: took 399.510171ms for pod "kube-scheduler-old-k8s-version-610853" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:31:12.999228  474795 pod_ready.go:40] duration metric: took 36.409491139s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:31:13.055214  474795 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1025 10:31:13.058157  474795 out.go:203] 
	W1025 10:31:13.060992  474795 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1025 10:31:13.063856  474795 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1025 10:31:13.067316  474795 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-610853" cluster and "default" namespace by default
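	The skew warning above (kubectl 1.33.2 against a v1.28.0 cluster) can be avoided with the version-matched kubectl that minikube bundles, as the hint suggests. A minimal sketch using this run's profile name, plus a by-hand check of the same healthz endpoint the log polls (anonymous access to /healthz is an assumption that holds under default kubeadm RBAC):

		# version-matched kubectl for the v1.28.0 cluster
		minikube -p old-k8s-version-610853 kubectl -- get pods -A

		# the apiserver health probe from the log, run manually (-k skips TLS verification)
		curl -k https://192.168.85.2:8443/healthz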
	
	
	==> CRI-O <==
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.657644887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.664637812Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.665139406Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.682910658Z" level=info msg="Created container ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper" id=b46a940b-47aa-4a0d-97ba-4294a7784139 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.683677257Z" level=info msg="Starting container: ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118" id=59b67ddd-8c3d-409d-a657-2b8083d3233d name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:31:07 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:07.685524386Z" level=info msg="Started container" PID=1629 containerID=ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper id=59b67ddd-8c3d-409d-a657-2b8083d3233d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe
	Oct 25 10:31:07 old-k8s-version-610853 conmon[1627]: conmon ffc56b496e33c1699567 <ninfo>: container 1629 exited with status 1
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.475922574Z" level=info msg="Removing container: ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.488950669Z" level=info msg="Error loading conmon cgroup of container ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e: cgroup deleted" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:08 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:08.492188724Z" level=info msg="Removed container ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv/dashboard-metrics-scraper" id=0d1d998d-d484-4d99-b31f-94265edd0c89 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.291871064Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.296059212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.29611271Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.296136571Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299223838Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299258407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.299279347Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302393273Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302433036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.302456757Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305540538Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305578741Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.305605047Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.308874602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:31:16 old-k8s-version-610853 crio[650]: time="2025-10-25T10:31:16.308906947Z" level=info msg="Updated default CNI network name to kindnet"
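	The CNI monitoring events above track kindnet rewriting its conflist atomically: write a .temp file, then rename it over /etc/cni/net.d/10-kindnet.conflist. A sketch for inspecting the resulting config from the node, assuming minikube ssh accepts a command for this profile:

		minikube -p old-k8s-version-610853 ssh "ls -l /etc/cni/net.d/ && sudo cat /etc/cni/net.d/10-kindnet.conflist"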
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ffc56b496e33c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago       Exited              dashboard-metrics-scraper   2                   4e81a197fafb3       dashboard-metrics-scraper-5f989dc9cf-vpwjv       kubernetes-dashboard
	091e16d9863a0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   b463a1bcf1d99       storage-provisioner                              kube-system
	830e333170d03       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   ebc0291f419de       kubernetes-dashboard-8694d4445c-r2j5g            kubernetes-dashboard
	94efb18a2e973       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   f2d5af5e2ae34       busybox                                          default
	b4a063cf1b815       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   d574af27702da       coredns-5dd5756b68-mp4xx                         kube-system
	2fff77ac5e928       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   8065ed89825d3       kindnet-vgctp                                    kube-system
	727f02ebb4a44       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   e2b07b31c3bc9       kube-proxy-pvxrq                                 kube-system
	ac4d66f1ea5f4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   b463a1bcf1d99       storage-provisioner                              kube-system
	4e548687fb61e       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   651aa5a157573       etcd-old-k8s-version-610853                      kube-system
	ce552c2cb6e4e       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   346eb58d17ab8       kube-apiserver-old-k8s-version-610853            kube-system
	de79fc3d299d3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8c760185afd8b       kube-scheduler-old-k8s-version-610853            kube-system
	01ef458479a2b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   1e2fcaa7151dd       kube-controller-manager-old-k8s-version-610853   kube-system
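	The table above is the runtime's own view of the containers; on a CRI-O node it can likely be reproduced with crictl (the -a flag also lists the Exited entries shown here):

		minikube -p old-k8s-version-610853 ssh "sudo crictl ps -a"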
	
	
	==> coredns [b4a063cf1b81586de3620a2e35b3fb766dfd73a20da17e5cb8ba258e8c2b2cfe] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60684 - 37659 "HINFO IN 4653879309914932737.8624811222351995741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034934892s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
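	The readiness messages above show CoreDNS holding off serving until it can reach the Kubernetes API, including one i/o timeout against 10.96.0.1:443. A quick way to confirm it eventually reports Ready, using the k8s-app=kube-dns label from the wait list earlier in this log:

		kubectl -n kube-system get pods -l k8s-app=kube-dns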
	
	
	==> describe nodes <==
	Name:               old-k8s-version-610853
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-610853
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=old-k8s-version-610853
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_29_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:29:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-610853
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:31:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:31:05 +0000   Sat, 25 Oct 2025 10:29:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-610853
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                16c3fb75-2c85-4847-b008-4bbd6334ab71
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-mp4xx                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     110s
	  kube-system                 etcd-old-k8s-version-610853                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-vgctp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-610853             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-610853    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-pvxrq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-610853             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vpwjv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-r2j5g             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m11s (x8 over 2m11s)  kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s                   node-controller  Node old-k8s-version-610853 event: Registered Node old-k8s-version-610853 in Controller
	  Normal  NodeReady                96s                    kubelet          Node old-k8s-version-610853 status is now: NodeReady
	  Normal  Starting                 61s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node old-k8s-version-610853 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node old-k8s-version-610853 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                    node-controller  Node old-k8s-version-610853 event: Registered Node old-k8s-version-610853 in Controller
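	The node section above is standard describe output and can be regenerated directly, assuming kubectl is pointed at this cluster:

		kubectl describe node old-k8s-version-610853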
	
	
	==> dmesg <==
	[Oct25 10:04] overlayfs: idmapped layers are currently not supported
	[Oct25 10:09] overlayfs: idmapped layers are currently not supported
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4e548687fb61e724d0492eca4c4b6af8ea0790732c7f2dbc6dd4670e9ee4e668] <==
	{"level":"info","ts":"2025-10-25T10:30:30.549665Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:30:30.549675Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-25T10:30:30.549871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-25T10:30:30.549926Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-25T10:30:30.549994Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:30:30.550019Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-25T10:30:30.553981Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-25T10:30:30.554168Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-25T10:30:30.55419Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-25T10:30:30.554308Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:30:30.554316Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-25T10:30:31.667204Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-25T10:30:31.667413Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.667515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-25T10:30:31.671083Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-610853 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-25T10:30:31.671194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:30:31.675702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-25T10:30:31.67121Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-25T10:30:31.6767Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-25T10:30:31.679221Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-25T10:30:31.679289Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 10:31:30 up  2:13,  0 user,  load average: 1.55, 3.15, 2.91
	Linux old-k8s-version-610853 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2fff77ac5e928293117962aeb11abaa056b2eae73a468ecf54a7ac63f46f3a60] <==
	I1025 10:30:36.078372       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:30:36.078617       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:30:36.078756       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:30:36.078775       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:30:36.078788       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:30:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:30:36.290693       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:30:36.290766       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:30:36.290832       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:30:36.292298       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:31:06.290850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:31:06.290850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:31:06.292073       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:31:06.293192       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 10:31:07.690989       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:31:07.691033       1 metrics.go:72] Registering metrics
	I1025 10:31:07.691104       1 controller.go:711] "Syncing nftables rules"
	I1025 10:31:16.290827       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:31:16.290876       1 main.go:301] handling current node
	I1025 10:31:26.295287       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:31:26.295319       1 main.go:301] handling current node
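	The kindnet i/o timeouts above were against the kubernetes Service VIP (10.96.0.1:443) and cleared once the informer caches synced at 10:31:07. To see what that VIP fronts:

		kubectl get endpoints kubernetes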
	
	
	==> kube-apiserver [ce552c2cb6e4e7361de884db9ef88fd97d4affae078257f79a846fb8bf14e468] <==
	I1025 10:30:34.261053       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1025 10:30:34.270313       1 aggregator.go:166] initial CRD sync complete...
	I1025 10:30:34.271548       1 autoregister_controller.go:141] Starting autoregister controller
	I1025 10:30:34.271612       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:30:34.292221       1 shared_informer.go:318] Caches are synced for configmaps
	I1025 10:30:34.292739       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:30:34.337565       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:30:34.384895       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:30:34.392201       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1025 10:30:34.392900       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1025 10:30:34.399352       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1025 10:30:34.400037       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1025 10:30:34.400058       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E1025 10:30:34.408015       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:30:35.108502       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:30:36.245248       1 controller.go:624] quota admission added evaluator for: namespaces
	I1025 10:30:36.297110       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1025 10:30:36.336244       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:30:36.374746       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:30:36.403675       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1025 10:30:36.505020       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.13.37"}
	I1025 10:30:36.527551       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.102.223"}
	I1025 10:30:47.111257       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1025 10:30:47.160898       1 controller.go:624] quota admission added evaluator for: endpoints
	I1025 10:30:47.373761       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
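	The apiserver log above records the dashboard Services receiving ClusterIPs (10.110.13.37 and 10.98.102.223); those allocations can be confirmed with:

		kubectl -n kubernetes-dashboard get svc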
	
	
	==> kube-controller-manager [01ef458479a2b900515d032a9c6b16080bf3eecf88bec56c1db80e3b57c927a1] <==
	I1025 10:30:47.319790       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-r2j5g"
	I1025 10:30:47.327514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="455.01016ms"
	I1025 10:30:47.327614       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	I1025 10:30:47.327792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="171.3µs"
	I1025 10:30:47.342015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="222.607828ms"
	I1025 10:30:47.343715       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="227.439463ms"
	I1025 10:30:47.367876       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="23.005977ms"
	I1025 10:30:47.370314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.652µs"
	I1025 10:30:47.370860       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.913µs"
	I1025 10:30:47.370774       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:30:47.370962       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1025 10:30:47.398018       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1025 10:30:47.398412       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.274263ms"
	I1025 10:30:47.401851       1 shared_informer.go:318] Caches are synced for garbage collector
	I1025 10:30:47.416458       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.896514ms"
	I1025 10:30:47.416555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.071µs"
	I1025 10:30:52.449827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.291556ms"
	I1025 10:30:52.450090       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="63.484µs"
	I1025 10:30:57.451357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.766µs"
	I1025 10:30:58.460104       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.106µs"
	I1025 10:30:59.463005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.242µs"
	I1025 10:31:08.508130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.65µs"
	I1025 10:31:11.196474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.788046ms"
	I1025 10:31:11.197607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.794µs"
	I1025 10:31:17.672041       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="57.699µs"
	
	
	==> kube-proxy [727f02ebb4a44e8230fd46ee0e62a5d410cc1ab651fb405b54edd36cb5b76a9b] <==
	I1025 10:30:36.092068       1 server_others.go:69] "Using iptables proxy"
	I1025 10:30:36.111814       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1025 10:30:36.180557       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:30:36.195667       1 server_others.go:152] "Using iptables Proxier"
	I1025 10:30:36.195771       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1025 10:30:36.195804       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1025 10:30:36.195858       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1025 10:30:36.196151       1 server.go:846] "Version info" version="v1.28.0"
	I1025 10:30:36.196202       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:30:36.198611       1 config.go:188] "Starting service config controller"
	I1025 10:30:36.198633       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1025 10:30:36.198650       1 config.go:97] "Starting endpoint slice config controller"
	I1025 10:30:36.198654       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1025 10:30:36.198979       1 config.go:315] "Starting node config controller"
	I1025 10:30:36.198994       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1025 10:30:36.299350       1 shared_informer.go:318] Caches are synced for node config
	I1025 10:30:36.299389       1 shared_informer.go:318] Caches are synced for service config
	I1025 10:30:36.299431       1 shared_informer.go:318] Caches are synced for endpoint slice config
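	kube-proxy above is running the iptables proxier with no IPv6 cluster CIDR detected. A sketch for inspecting the NAT rules it programs, assuming the standard KUBE-SERVICES chain name:

		minikube -p old-k8s-version-610853 ssh "sudo iptables -t nat -L KUBE-SERVICES -n | head"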
	
	
	==> kube-scheduler [de79fc3d299d345572eba9b73c7727595ed89922ab269e444d5396b944bf1644] <==
	W1025 10:30:34.303092       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1025 10:30:34.303126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1025 10:30:34.303136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1025 10:30:34.303199       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1025 10:30:34.303257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1025 10:30:34.303293       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1025 10:30:34.303375       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303410       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1025 10:30:34.303512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1025 10:30:34.303577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303612       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1025 10:30:34.303986       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1025 10:30:34.303298       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1025 10:30:34.304112       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1025 10:30:34.304072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1025 10:30:34.304177       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1025 10:30:34.304210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1025 10:30:34.304238       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1025 10:30:34.304313       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1025 10:30:34.304333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1025 10:30:34.304384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1025 10:30:34.304421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I1025 10:30:35.874545       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 25 10:30:41 old-k8s-version-610853 kubelet[778]: I1025 10:30:41.168670     778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.325869     778 topology_manager.go:215] "Topology Admit Handler" podUID="b64430c6-825f-484b-9d66-8eb521ff792f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.353093     778 topology_manager.go:215] "Topology Admit Handler" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.401156     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jv22\" (UniqueName: \"kubernetes.io/projected/b64430c6-825f-484b-9d66-8eb521ff792f-kube-api-access-4jv22\") pod \"kubernetes-dashboard-8694d4445c-r2j5g\" (UID: \"b64430c6-825f-484b-9d66-8eb521ff792f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.401358     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b64430c6-825f-484b-9d66-8eb521ff792f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-r2j5g\" (UID: \"b64430c6-825f-484b-9d66-8eb521ff792f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.502379     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6j4m\" (UniqueName: \"kubernetes.io/projected/ff8834ac-0c12-4190-83d6-bf94a1049287-kube-api-access-m6j4m\") pod \"dashboard-metrics-scraper-5f989dc9cf-vpwjv\" (UID: \"ff8834ac-0c12-4190-83d6-bf94a1049287\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: I1025 10:30:47.502596     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ff8834ac-0c12-4190-83d6-bf94a1049287-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-vpwjv\" (UID: \"ff8834ac-0c12-4190-83d6-bf94a1049287\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv"
	Oct 25 10:30:47 old-k8s-version-610853 kubelet[778]: W1025 10:30:47.675578     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/d9ac8e10f5b1ea2965bab805d608fa83def6ed75bd1273d0c136ee442b0b45b2/crio-4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe WatchSource:0}: Error finding container 4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe: Status 404 returned error can't find the container with id 4e81a197fafb393954120ac05b22a3283e72f859896cbc2b96b762057d9facfe
	Oct 25 10:30:57 old-k8s-version-610853 kubelet[778]: I1025 10:30:57.430636     778 scope.go:117] "RemoveContainer" containerID="06feaf51a843fee5e053ff4ac724eaa39ea49aee6810dd9f7b44fd453a4e4e20"
	Oct 25 10:30:57 old-k8s-version-610853 kubelet[778]: I1025 10:30:57.450651     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-r2j5g" podStartSLOduration=5.842347327 podCreationTimestamp="2025-10-25 10:30:47 +0000 UTC" firstStartedPulling="2025-10-25 10:30:47.655088664 +0000 UTC m=+18.562252866" lastFinishedPulling="2025-10-25 10:30:52.26260767 +0000 UTC m=+23.169771872" observedRunningTime="2025-10-25 10:30:52.437106737 +0000 UTC m=+23.344270939" watchObservedRunningTime="2025-10-25 10:30:57.449866333 +0000 UTC m=+28.357030527"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: I1025 10:30:58.441334     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: I1025 10:30:58.441856     778 scope.go:117] "RemoveContainer" containerID="06feaf51a843fee5e053ff4ac724eaa39ea49aee6810dd9f7b44fd453a4e4e20"
	Oct 25 10:30:58 old-k8s-version-610853 kubelet[778]: E1025 10:30:58.442281     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:30:59 old-k8s-version-610853 kubelet[778]: I1025 10:30:59.449288     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:30:59 old-k8s-version-610853 kubelet[778]: E1025 10:30:59.449629     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:06 old-k8s-version-610853 kubelet[778]: I1025 10:31:06.465310     778 scope.go:117] "RemoveContainer" containerID="ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda"
	Oct 25 10:31:07 old-k8s-version-610853 kubelet[778]: I1025 10:31:07.654655     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: I1025 10:31:08.473701     778 scope.go:117] "RemoveContainer" containerID="ee23cd9eac7ce911bdc2f280666e2763a7792427c0fefe4c524a057117ed054e"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: I1025 10:31:08.474006     778 scope.go:117] "RemoveContainer" containerID="ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	Oct 25 10:31:08 old-k8s-version-610853 kubelet[778]: E1025 10:31:08.474331     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:17 old-k8s-version-610853 kubelet[778]: I1025 10:31:17.655145     778 scope.go:117] "RemoveContainer" containerID="ffc56b496e33c1699567d30156945a2d74c4206c408d2790f67085f3a97bf118"
	Oct 25 10:31:17 old-k8s-version-610853 kubelet[778]: E1025 10:31:17.655493     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vpwjv_kubernetes-dashboard(ff8834ac-0c12-4190-83d6-bf94a1049287)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vpwjv" podUID="ff8834ac-0c12-4190-83d6-bf94a1049287"
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:31:25 old-k8s-version-610853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [830e333170d03998870a50498589369cec9f3aec50bea277636833a4af430c9d] <==
	2025/10/25 10:30:52 Starting overwatch
	2025/10/25 10:30:52 Using namespace: kubernetes-dashboard
	2025/10/25 10:30:52 Using in-cluster config to connect to apiserver
	2025/10/25 10:30:52 Using secret token for csrf signing
	2025/10/25 10:30:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:30:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:30:52 Successful initial request to the apiserver, version: v1.28.0
	2025/10/25 10:30:52 Generating JWE encryption key
	2025/10/25 10:30:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:30:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:30:53 Initializing JWE encryption key from synchronized object
	2025/10/25 10:30:53 Creating in-cluster Sidecar client
	2025/10/25 10:30:53 Serving insecurely on HTTP port: 9090
	2025/10/25 10:30:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:31:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [091e16d9863a0d528c1db558671f37699e8fef853fcec9f0ddb84719849a6993] <==
	I1025 10:31:06.529173       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:31:06.542499       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:31:06.543661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1025 10:31:23.947538       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:31:23.947781       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268!
	I1025 10:31:23.951643       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24744037-24e8-4570-96d7-2db397f7e01e", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268 became leader
	I1025 10:31:24.050412       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610853_8a3b1d8f-6580-4bbf-bdb0-9ccece0dd268!
	
	
	==> storage-provisioner [ac4d66f1ea5f4c3937ad11f2c045143cdcc911646adefc4dc242e213d38acdda] <==
	I1025 10:30:35.960087       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:31:05.962540       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
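The storage-provisioner failure above is a plain connectivity timeout: the pod dials the kubernetes service ClusterIP (10.96.0.1:443) and never gets an answer. A minimal probe of the same endpoint, assuming shell access to the node via `minikube ssh` (profile name taken from the logs above):

	# Probe the apiserver VIP the provisioner dials; a hang here reproduces the i/o timeout above.
	out/minikube-linux-arm64 -p old-k8s-version-610853 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version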
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610853 -n old-k8s-version-610853
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610853 -n old-k8s-version-610853: exit status 2 (376.27003ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
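Exit status 2 from the probe above is informational here: `--format={{.APIServer}}` is a Go template over minikube's status output, and the command can exit non-zero even while it prints "Running". A by-hand sketch of the same probe with the exit code captured:

	# Same status probe helpers_test.go runs, with the exit code made visible.
	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p old-k8s-version-610853 -n old-k8s-version-610853; echo "exit=$?"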
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-610853 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.51s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.49s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.713977ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:33:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
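The MK_ADDON_ENABLE_PAUSED exit is minikube's paused pre-check rather than the addon itself: per the stderr above, it shells out to `sudo runc list -f json` on the node and fails because /run/runc does not exist. A sketch of the same check run by hand, assuming shell access via `minikube ssh`:

	# Re-run the paused check that `addons enable` performs (command copied from the stderr above).
	out/minikube-linux-arm64 -p default-k8s-diff-port-204074 ssh -- sudo runc list -f json
	# The missing state directory is the immediate cause of the exit status 1.
	out/minikube-linux-arm64 -p default-k8s-diff-port-204074 ssh -- ls -la /run/runc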
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-204074 describe deploy/metrics-server -n kube-system: exit status 1 (85.269881ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-204074 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
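With the deployment absent, the image assertion at start_stop_delete_test.go:219 has nothing to report. Had metrics-server been created, the registry override could be checked directly; a hypothetical verification, assuming the deployment exists:

	# List container images on the deployment; the test expects
	# "fake.domain/registry.k8s.io/echoserver:1.4" to appear here.
	kubectl --context default-k8s-diff-port-204074 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'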
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-204074
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-204074:

-- stdout --
	[
	    {
	        "Id": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	        "Created": "2025-10-25T10:31:40.749344043Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478915,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:31:40.849998147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hostname",
	        "HostsPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hosts",
	        "LogPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a-json.log",
	        "Name": "/default-k8s-diff-port-204074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-204074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-204074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	                "LowerDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-204074",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-204074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-204074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b5815d71793d03735a949009d0b927fdf588df78bcdc16c3fd6cf8d2c3b5c5e0",
	            "SandboxKey": "/var/run/docker/netns/b5815d71793d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-204074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:7f:94:5b:f5:1a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8d6d82e4f1c3e18dd593c28bd34ec865e52f7ca53dce62df012fba5b98ee7a9",
	                    "EndpointID": "863945cf53832ad156e6664f1160c241cd889b3e083f6f13e9cdaf989ab709c1",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-204074",
	                        "114adef2e3f9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
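The inspect dump above is easier to query with a Go template than to read whole; the port-index expression below is the same shape minikube itself runs later in this log to find the SSH port. Two sketches against this container:

	# Single-field queries over the docker inspect output shown above.
	docker container inspect -f '{{.State.Status}}' default-k8s-diff-port-204074
	# HostPort of the 8444/tcp apiserver mapping (33435 in the dump above).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-204074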
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25: (1.226318425s)
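The capture below can be reproduced by hand; -n 25 limits the tail of each log, and --file (the flag the error box above recommends) writes the same output to disk instead of stdout. A by-hand equivalent, assuming the profile still exists:

	out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25
	out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs --file=logs.txt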
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-821614 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-821614                │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ ssh     │ -p cilium-821614 sudo crio config                                                                                                                                                                                                             │ cilium-821614                │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │                     │
	│ delete  │ -p cilium-821614                                                                                                                                                                                                                              │ cilium-821614                │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:27 UTC │
	│ start   │ -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-068963     │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p kubernetes-upgrade-845331                                                                                                                                                                                                                  │ kubernetes-upgrade-845331    │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-068963                                                                                                                                                                                                                   │ force-systemd-env-068963     │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-506318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:32:14
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:32:14.994334  481784 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:32:14.994479  481784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:32:14.994486  481784 out.go:374] Setting ErrFile to fd 2...
	I1025 10:32:14.994492  481784 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:32:14.994754  481784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:32:14.995216  481784 out.go:368] Setting JSON to false
	I1025 10:32:14.996176  481784 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8085,"bootTime":1761380250,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:32:14.996249  481784 start.go:141] virtualization:  
	I1025 10:32:15.000249  481784 out.go:179] * [embed-certs-419185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:32:15.004253  481784 notify.go:220] Checking for updates...
	I1025 10:32:15.008780  481784 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:32:15.013048  481784 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:32:15.018844  481784 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:32:15.025632  481784 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:32:15.028883  481784 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:32:15.033485  481784 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:32:15.037269  481784 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:32:15.037411  481784 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:32:15.103275  481784 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:32:15.103417  481784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:32:15.208986  481784 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:32:15.199228669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:32:15.209087  481784 docker.go:318] overlay module found
	I1025 10:32:15.212430  481784 out.go:179] * Using the docker driver based on user configuration
	I1025 10:32:15.215360  481784 start.go:305] selected driver: docker
	I1025 10:32:15.215375  481784 start.go:925] validating driver "docker" against <nil>
	I1025 10:32:15.215395  481784 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:32:15.216115  481784 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:32:15.295896  481784 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:32:15.28673378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:32:15.296055  481784 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:32:15.296319  481784 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:32:15.299567  481784 out.go:179] * Using Docker driver with root privileges
	I1025 10:32:15.302337  481784 cni.go:84] Creating CNI manager for ""
	I1025 10:32:15.302413  481784 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:32:15.302427  481784 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:32:15.302511  481784 start.go:349] cluster config:
	{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:32:15.305744  481784 out.go:179] * Starting "embed-certs-419185" primary control-plane node in "embed-certs-419185" cluster
	I1025 10:32:15.308557  481784 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:32:15.311480  481784 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:32:15.314314  481784 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:32:15.314325  481784 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:32:15.314366  481784 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:32:15.314376  481784 cache.go:58] Caching tarball of preloaded images
	I1025 10:32:15.314443  481784 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:32:15.314453  481784 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:32:15.314568  481784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:32:15.314585  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json: {Name:mkef671a280b635d5b7d11f7a1033beefcc2793c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:15.339768  481784 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:32:15.339795  481784 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:32:15.339823  481784 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:32:15.339854  481784 start.go:360] acquireMachinesLock for embed-certs-419185: {Name:mk5a130bf45ea43a164134eaf1f0ed9a364dff5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:32:15.339973  481784 start.go:364] duration metric: took 98.791µs to acquireMachinesLock for "embed-certs-419185"
	I1025 10:32:15.340002  481784 start.go:93] Provisioning new machine with config: &{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:32:15.340080  481784 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:32:14.904111  478351 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:32:15.441820  478351 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:32:15.442349  478351 kubeadm.go:318] 
	I1025 10:32:15.442419  478351 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:32:15.442425  478351 kubeadm.go:318] 
	I1025 10:32:15.442505  478351 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:32:15.442520  478351 kubeadm.go:318] 
	I1025 10:32:15.442547  478351 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:32:15.442647  478351 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:32:15.442700  478351 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:32:15.442704  478351 kubeadm.go:318] 
	I1025 10:32:15.442760  478351 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:32:15.442764  478351 kubeadm.go:318] 
	I1025 10:32:15.442813  478351 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:32:15.442818  478351 kubeadm.go:318] 
	I1025 10:32:15.442872  478351 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:32:15.442950  478351 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:32:15.443021  478351 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:32:15.443026  478351 kubeadm.go:318] 
	I1025 10:32:15.443113  478351 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:32:15.443217  478351 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:32:15.443223  478351 kubeadm.go:318] 
	I1025 10:32:15.443337  478351 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token lt2o9z.1offt2v34gm8q3hv \
	I1025 10:32:15.443444  478351 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:32:15.443465  478351 kubeadm.go:318] 	--control-plane 
	I1025 10:32:15.443469  478351 kubeadm.go:318] 
	I1025 10:32:15.443557  478351 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:32:15.443561  478351 kubeadm.go:318] 
	I1025 10:32:15.443646  478351 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token lt2o9z.1offt2v34gm8q3hv \
	I1025 10:32:15.443760  478351 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:32:15.449158  478351 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:32:15.449400  478351 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:32:15.449509  478351 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:32:15.449524  478351 cni.go:84] Creating CNI manager for ""
	I1025 10:32:15.449532  478351 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:32:15.452513  478351 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:32:15.459596  478351 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:32:15.464386  478351 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:32:15.464404  478351 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:32:15.487232  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:32:15.919908  478351 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:32:15.920033  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:15.920133  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-204074 minikube.k8s.io/updated_at=2025_10_25T10_32_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=default-k8s-diff-port-204074 minikube.k8s.io/primary=true
	I1025 10:32:16.163583  478351 ops.go:34] apiserver oom_adj: -16
	I1025 10:32:16.163715  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:16.664386  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:17.164003  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:17.663888  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:18.163807  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:18.664363  478351 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:18.907636  478351 kubeadm.go:1113] duration metric: took 2.987646964s to wait for elevateKubeSystemPrivileges
	I1025 10:32:18.907670  478351 kubeadm.go:402] duration metric: took 28.336769132s to StartCluster
	I1025 10:32:18.907688  478351 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:18.907748  478351 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:32:18.908412  478351 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:18.908624  478351 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:32:18.908757  478351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:32:18.909009  478351 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:32:18.909047  478351 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:32:18.909111  478351 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-204074"
	I1025 10:32:18.909132  478351 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-204074"
	I1025 10:32:18.909155  478351 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:32:18.909619  478351 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:32:18.910162  478351 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-204074"
	I1025 10:32:18.910180  478351 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-204074"
	I1025 10:32:18.910450  478351 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:32:18.913627  478351 out.go:179] * Verifying Kubernetes components...
	I1025 10:32:18.919586  478351 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:32:18.954669  478351 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-204074"
	I1025 10:32:18.954710  478351 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:32:18.955143  478351 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:32:18.960210  478351 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:32:18.965771  478351 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:32:18.965794  478351 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:32:18.965858  478351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:32:18.998357  478351 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:32:18.998377  478351 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:32:18.998439  478351 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:32:19.018380  478351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:32:19.047513  478351 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33432 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:32:19.517204  478351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:32:19.549140  478351 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:32:19.549319  478351 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:32:15.343409  481784 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:32:15.343658  481784 start.go:159] libmachine.API.Create for "embed-certs-419185" (driver="docker")
	I1025 10:32:15.343698  481784 client.go:168] LocalClient.Create starting
	I1025 10:32:15.343789  481784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:32:15.343828  481784 main.go:141] libmachine: Decoding PEM data...
	I1025 10:32:15.343841  481784 main.go:141] libmachine: Parsing certificate...
	I1025 10:32:15.343986  481784 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:32:15.344016  481784 main.go:141] libmachine: Decoding PEM data...
	I1025 10:32:15.344029  481784 main.go:141] libmachine: Parsing certificate...
	I1025 10:32:15.344430  481784 cli_runner.go:164] Run: docker network inspect embed-certs-419185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:32:15.362183  481784 cli_runner.go:211] docker network inspect embed-certs-419185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:32:15.362275  481784 network_create.go:284] running [docker network inspect embed-certs-419185] to gather additional debugging logs...
	I1025 10:32:15.362296  481784 cli_runner.go:164] Run: docker network inspect embed-certs-419185
	W1025 10:32:15.394899  481784 cli_runner.go:211] docker network inspect embed-certs-419185 returned with exit code 1
	I1025 10:32:15.394935  481784 network_create.go:287] error running [docker network inspect embed-certs-419185]: docker network inspect embed-certs-419185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-419185 not found
	I1025 10:32:15.394950  481784 network_create.go:289] output of [docker network inspect embed-certs-419185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-419185 not found
	
	** /stderr **
	I1025 10:32:15.395057  481784 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:32:15.427560  481784 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:32:15.427822  481784 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:32:15.428177  481784 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:32:15.428593  481784 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a2cb0}
	I1025 10:32:15.428613  481784 network_create.go:124] attempt to create docker network embed-certs-419185 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:32:15.428676  481784 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-419185 embed-certs-419185
	I1025 10:32:15.515646  481784 network_create.go:108] docker network embed-certs-419185 192.168.76.0/24 created
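
	The "skipping subnet ... that is taken" lines above show the free-subnet scan: candidate private /24s are tried in order (192.168.49.0, .58, .67, ...) and the first one with no existing bridge interface wins, here 192.168.76.0/24. A rough Go sketch of that selection, assuming the fixed +9 step implied by the log and a precomputed set of in-use subnets (firstFreeSubnet is illustrative, not minikube's API):

		package main

		import "fmt"

		// firstFreeSubnet walks 192.168.49.0/24 upward in steps of 9 (.49, .58,
		// .67, .76, ...) and returns the first candidate not already in use.
		func firstFreeSubnet(taken map[string]bool) string {
			for third := 49; third <= 254; third += 9 {
				cidr := fmt.Sprintf("192.168.%d.0/24", third)
				if !taken[cidr] {
					return cidr
				}
			}
			return "" // nothing free in the scanned range
		}

		func main() {
			taken := map[string]bool{
				"192.168.49.0/24": true, // br-101b69e1e09b
				"192.168.58.0/24": true, // br-fee160f0176f
				"192.168.67.0/24": true, // br-5368f13f34ad
			}
			fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, as chosen above
		}
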
	I1025 10:32:15.515685  481784 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-419185" container
	I1025 10:32:15.515756  481784 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:32:15.537455  481784 cli_runner.go:164] Run: docker volume create embed-certs-419185 --label name.minikube.sigs.k8s.io=embed-certs-419185 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:32:15.563743  481784 oci.go:103] Successfully created a docker volume embed-certs-419185
	I1025 10:32:15.563841  481784 cli_runner.go:164] Run: docker run --rm --name embed-certs-419185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-419185 --entrypoint /usr/bin/test -v embed-certs-419185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:32:16.296243  481784 oci.go:107] Successfully prepared a docker volume embed-certs-419185
	I1025 10:32:16.296275  481784 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:32:16.296294  481784 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:32:16.296367  481784 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-419185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:32:19.593488  478351 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:32:20.659554  478351 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.110192599s)
	I1025 10:32:20.659634  478351 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.110426431s)
	I1025 10:32:20.659709  478351 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
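
	The sed pipeline completed above splices two stanzas into the stock CoreDNS Corefile: a hosts block mapping host.minikube.internal to the docker gateway, inserted ahead of the forward plugin so it matches first, and a log directive ahead of errors. After the kubectl replace, the touched part of the Corefile would read roughly as follows (a sketch; plugins elided with "..." are whatever the stock ConfigMap already carried):

		.:53 {
		        log
		        errors
		        ...
		        hosts {
		           192.168.85.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf
		        ...
		}
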
	I1025 10:32:20.659663  478351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.066154103s)
	I1025 10:32:20.663233  478351 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.145919556s)
	I1025 10:32:20.663421  478351 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-204074" to be "Ready" ...
	I1025 10:32:20.746259  478351 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:32:20.772579  478351 addons.go:514] duration metric: took 1.863513862s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:32:21.164236  478351 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-204074" context rescaled to 1 replicas
	W1025 10:32:22.666472  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:21.099239  481784 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-419185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.802835212s)
	I1025 10:32:21.099273  481784 kic.go:203] duration metric: took 4.802976432s to extract preloaded images to volume ...
	W1025 10:32:21.099413  481784 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:32:21.099524  481784 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:32:21.168426  481784 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-419185 --name embed-certs-419185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-419185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-419185 --network embed-certs-419185 --ip 192.168.76.2 --volume embed-certs-419185:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:32:21.507698  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Running}}
	I1025 10:32:21.529492  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:21.554766  481784 cli_runner.go:164] Run: docker exec embed-certs-419185 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:32:21.627977  481784 oci.go:144] the created container "embed-certs-419185" has a running status.
	I1025 10:32:21.628011  481784 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa...
	I1025 10:32:21.860311  481784 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:32:21.884895  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:21.916555  481784 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:32:21.916578  481784 kic_runner.go:114] Args: [docker exec --privileged embed-certs-419185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:32:21.976364  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:21.997299  481784 machine.go:93] provisionDockerMachine start ...
	I1025 10:32:21.997390  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:22.023452  481784 main.go:141] libmachine: Using SSH client type: native
	I1025 10:32:22.023815  481784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33437 <nil> <nil>}
	I1025 10:32:22.023833  481784 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:32:22.024771  481784 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1025 10:32:25.182909  481784 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:32:25.182935  481784 ubuntu.go:182] provisioning hostname "embed-certs-419185"
	I1025 10:32:25.183017  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:25.201694  481784 main.go:141] libmachine: Using SSH client type: native
	I1025 10:32:25.202015  481784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33437 <nil> <nil>}
	I1025 10:32:25.202032  481784 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-419185 && echo "embed-certs-419185" | sudo tee /etc/hostname
	I1025 10:32:25.369075  481784 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:32:25.369160  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:25.387692  481784 main.go:141] libmachine: Using SSH client type: native
	I1025 10:32:25.388021  481784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33437 <nil> <nil>}
	I1025 10:32:25.388044  481784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-419185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-419185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-419185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:32:25.539522  481784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:32:25.539549  481784 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:32:25.539579  481784 ubuntu.go:190] setting up certificates
	I1025 10:32:25.539589  481784 provision.go:84] configureAuth start
	I1025 10:32:25.539657  481784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:32:25.556826  481784 provision.go:143] copyHostCerts
	I1025 10:32:25.556897  481784 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:32:25.556906  481784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:32:25.556986  481784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:32:25.557086  481784 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:32:25.557091  481784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:32:25.557119  481784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:32:25.557183  481784 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:32:25.557188  481784 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:32:25.557325  481784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:32:25.557405  481784 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.embed-certs-419185 san=[127.0.0.1 192.168.76.2 embed-certs-419185 localhost minikube]
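
	The server cert generated here carries the SAN list [127.0.0.1 192.168.76.2 embed-certs-419185 localhost minikube] and is signed by the shared minikube CA. A compressed Go sketch of issuing such a cert with crypto/x509 (a throwaway CA stands in for ca.pem/ca-key.pem and error handling is elided; this is not minikube's provision code):

		package main

		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"fmt"
			"math/big"
			"net"
			"time"
		)

		func main() {
			// throwaway CA standing in for the profile's ca.pem / ca-key.pem
			caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
			caTmpl := &x509.Certificate{
				SerialNumber:          big.NewInt(1),
				Subject:               pkix.Name{CommonName: "minikubeCA"},
				NotBefore:             time.Now().Add(-time.Hour),
				NotAfter:              time.Now().Add(24 * 365 * time.Hour),
				IsCA:                  true,
				KeyUsage:              x509.KeyUsageCertSign,
				BasicConstraintsValid: true,
			}
			caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
			ca, _ := x509.ParseCertificate(caDER)

			// server cert with the org and SANs from the log line above
			key, _ := rsa.GenerateKey(rand.Reader, 2048)
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(2),
				Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-419185"}},
				NotBefore:    time.Now().Add(-time.Hour),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				DNSNames:     []string{"embed-certs-419185", "localhost", "minikube"},
				IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			}
			der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
			fmt.Println(len(der), err)
		}
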
	I1025 10:32:25.835973  481784 provision.go:177] copyRemoteCerts
	I1025 10:32:25.836054  481784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:32:25.836107  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:25.852956  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:25.954994  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:32:25.972096  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1025 10:32:25.988957  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:32:26.009158  481784 provision.go:87] duration metric: took 469.533037ms to configureAuth
	I1025 10:32:26.009187  481784 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:32:26.009411  481784 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:32:26.009545  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:26.029122  481784 main.go:141] libmachine: Using SSH client type: native
	I1025 10:32:26.029435  481784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33437 <nil> <nil>}
	I1025 10:32:26.029455  481784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:32:26.301885  481784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:32:26.301956  481784 machine.go:96] duration metric: took 4.304634134s to provisionDockerMachine
	I1025 10:32:26.301981  481784 client.go:171] duration metric: took 10.95827598s to LocalClient.Create
	I1025 10:32:26.302033  481784 start.go:167] duration metric: took 10.958373999s to libmachine.API.Create "embed-certs-419185"
	I1025 10:32:26.302044  481784 start.go:293] postStartSetup for "embed-certs-419185" (driver="docker")
	I1025 10:32:26.302055  481784 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:32:26.302131  481784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:32:26.302169  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:26.319676  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:26.427999  481784 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:32:26.431526  481784 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:32:26.431554  481784 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:32:26.431565  481784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:32:26.431619  481784 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:32:26.431697  481784 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:32:26.431803  481784 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:32:26.439628  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:32:26.457286  481784 start.go:296] duration metric: took 155.225923ms for postStartSetup
	I1025 10:32:26.457671  481784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:32:26.475573  481784 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:32:26.475857  481784 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:32:26.475900  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:26.498776  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:26.600519  481784 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:32:26.605443  481784 start.go:128] duration metric: took 11.265346113s to createHost
	I1025 10:32:26.605469  481784 start.go:83] releasing machines lock for "embed-certs-419185", held for 11.26548415s
	I1025 10:32:26.605547  481784 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:32:26.622420  481784 ssh_runner.go:195] Run: cat /version.json
	I1025 10:32:26.622472  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:26.622839  481784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:32:26.622902  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:26.645036  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:26.645846  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:26.750989  481784 ssh_runner.go:195] Run: systemctl --version
	I1025 10:32:26.840815  481784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:32:26.876361  481784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:32:26.880820  481784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:32:26.880920  481784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:32:26.912805  481784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:32:26.912831  481784 start.go:495] detecting cgroup driver to use...
	I1025 10:32:26.912889  481784 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:32:26.912971  481784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:32:26.931094  481784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:32:26.944023  481784 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:32:26.944117  481784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:32:26.961169  481784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:32:26.980119  481784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:32:27.102844  481784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:32:27.235044  481784 docker.go:234] disabling docker service ...
	I1025 10:32:27.235111  481784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:32:27.258782  481784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:32:27.275232  481784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:32:27.425020  481784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:32:27.555009  481784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:32:27.568897  481784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:32:27.583562  481784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:32:27.583676  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.592359  481784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:32:27.592454  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.601451  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.610800  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.619691  481784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:32:27.629151  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.638888  481784 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:32:27.653950  481784 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
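
	The run of sed edits from 10:32:27.583 to 10:32:27.653 rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. The net effect on the drop-in, sketched as a TOML fragment (section placement is approximate and depends on what the kicbase image ships):

		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
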
	I1025 10:32:27.668427  481784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:32:27.677212  481784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:32:27.684697  481784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:32:27.798759  481784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:32:27.932230  481784 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:32:27.932301  481784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:32:27.936194  481784 start.go:563] Will wait 60s for crictl version
	I1025 10:32:27.936257  481784 ssh_runner.go:195] Run: which crictl
	I1025 10:32:27.939807  481784 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:32:27.966947  481784 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:32:27.967042  481784 ssh_runner.go:195] Run: crio --version
	I1025 10:32:28.003832  481784 ssh_runner.go:195] Run: crio --version
	I1025 10:32:28.041976  481784 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:32:24.668185  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:26.671385  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:28.671688  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:28.044918  481784 cli_runner.go:164] Run: docker network inspect embed-certs-419185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:32:28.063237  481784 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:32:28.067277  481784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:32:28.077991  481784 kubeadm.go:883] updating cluster {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:32:28.078108  481784 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:32:28.078172  481784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:32:28.115114  481784 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:32:28.115136  481784 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:32:28.115244  481784 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:32:28.144035  481784 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:32:28.144060  481784 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:32:28.144075  481784 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:32:28.144221  481784 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-419185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:32:28.144312  481784 ssh_runner.go:195] Run: crio config
	I1025 10:32:28.200727  481784 cni.go:84] Creating CNI manager for ""
	I1025 10:32:28.200751  481784 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:32:28.200771  481784 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:32:28.200821  481784 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-419185 NodeName:embed-certs-419185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:32:28.201082  481784 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-419185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:32:28.201171  481784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:32:28.209171  481784 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:32:28.209296  481784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:32:28.217379  481784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:32:28.231215  481784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:32:28.251838  481784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 10:32:28.265471  481784 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:32:28.269282  481784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:32:28.279941  481784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:32:28.400192  481784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:32:28.415779  481784 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185 for IP: 192.168.76.2
	I1025 10:32:28.415801  481784 certs.go:195] generating shared ca certs ...
	I1025 10:32:28.415817  481784 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:28.415982  481784 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:32:28.416045  481784 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:32:28.416057  481784 certs.go:257] generating profile certs ...
	I1025 10:32:28.416131  481784 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key
	I1025 10:32:28.416149  481784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.crt with IP's: []
	I1025 10:32:29.632889  481784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.crt ...
	I1025 10:32:29.632922  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.crt: {Name:mk6ecaa8c1671c47be0c4185cbd70dac2311886f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:29.633130  481784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key ...
	I1025 10:32:29.633145  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key: {Name:mka6804c44645cb131bdf458ec04cd42b433e5ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:29.633247  481784 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe
	I1025 10:32:29.633269  481784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt.627d90fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:32:29.863812  481784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt.627d90fe ...
	I1025 10:32:29.863845  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt.627d90fe: {Name:mk2fae4c2c8ab8fd4a5e0af654d7cc347a7ccab0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:29.864052  481784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe ...
	I1025 10:32:29.864074  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe: {Name:mk7f8cd637dcded011581f3578fafbba4d9c1fd4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:29.864169  481784 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt.627d90fe -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt
	I1025 10:32:29.864252  481784 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key
	I1025 10:32:29.864312  481784 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key
	I1025 10:32:29.864330  481784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt with IP's: []
	I1025 10:32:30.124681  481784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt ...
	I1025 10:32:30.124727  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt: {Name:mkd5944df347a63b84c32ff56a00704b90dd1c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:30.124941  481784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key ...
	I1025 10:32:30.124958  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key: {Name:mkd3d558495b37171112f4b7b3be25a82c1c9be1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:30.125164  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:32:30.125212  481784 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:32:30.125226  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:32:30.125256  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:32:30.125283  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:32:30.125313  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:32:30.125358  481784 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:32:30.126906  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:32:30.155830  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:32:30.181313  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:32:30.204653  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:32:30.222733  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:32:30.250114  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:32:30.267615  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:32:30.285971  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:32:30.303665  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:32:30.323189  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:32:30.342492  481784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:32:30.361505  481784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:32:30.374688  481784 ssh_runner.go:195] Run: openssl version
	I1025 10:32:30.380740  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:32:30.388967  481784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:32:30.393330  481784 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:32:30.393448  481784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:32:30.434104  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:32:30.443255  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:32:30.451130  481784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:32:30.454632  481784 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:32:30.454723  481784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:32:30.495657  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:32:30.504239  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:32:30.512196  481784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:32:30.515663  481784 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:32:30.515727  481784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:32:30.556456  481784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
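
	Each `openssl x509 -hash -noout` above computes the certificate's subject hash, which then names the symlink that OpenSSL-based clients look up in /etc/ssl/certs: 51391683.0 for 294017.pem, 3ec20f2e.0 for 2940172.pem, and b5213941.0 for minikubeCA.pem. A minimal Go sketch of that hash-and-link step, assuming openssl on PATH and write access to /etc/ssl/certs (hashLink is a hypothetical helper, not minikube's code):

		package main

		import (
			"fmt"
			"os"
			"os/exec"
			"path/filepath"
			"strings"
		)

		// hashLink computes the subject hash of a PEM cert via openssl and links
		// it as /etc/ssl/certs/<hash>.0, mirroring the "ln -fs" calls in the log.
		func hashLink(pem string) error {
			out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
			if err != nil {
				return err
			}
			link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
			_ = os.Remove(link) // replace any stale link, as -f would
			return os.Symlink(pem, link)
		}

		func main() {
			fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
		}
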
	I1025 10:32:30.564774  481784 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:32:30.568303  481784 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:32:30.568353  481784 kubeadm.go:400] StartCluster: {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:32:30.568441  481784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:32:30.568502  481784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:32:30.597298  481784 cri.go:89] found id: ""
	I1025 10:32:30.597419  481784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:32:30.605146  481784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:32:30.612666  481784 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:32:30.612758  481784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:32:30.620569  481784 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:32:30.620590  481784 kubeadm.go:157] found existing configuration files:
	
	I1025 10:32:30.620654  481784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:32:30.628348  481784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:32:30.628463  481784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:32:30.635723  481784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:32:30.643309  481784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:32:30.643423  481784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:32:30.650969  481784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:32:30.658182  481784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:32:30.658273  481784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:32:30.668743  481784 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:32:30.678728  481784 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:32:30.678812  481784 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
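
The four grep/rm pairs above implement one stale-config rule: keep a kubeconfig only if it already references the expected control-plane endpoint. The same logic as a shell sketch, with the file list taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Remove the file unless it points at the expected endpoint:
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
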
	I1025 10:32:30.687005  481784 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:32:30.727809  481784 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:32:30.728115  481784 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:32:30.758972  481784 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:32:30.759078  481784 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:32:30.759137  481784 kubeadm.go:318] OS: Linux
	I1025 10:32:30.759237  481784 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:32:30.759318  481784 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:32:30.759387  481784 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:32:30.759460  481784 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:32:30.759533  481784 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:32:30.759607  481784 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:32:30.759677  481784 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:32:30.759757  481784 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:32:30.759829  481784 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:32:30.846476  481784 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:32:30.846591  481784 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:32:30.846695  481784 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
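
The preflight hint above can be acted on directly; pinning the version keeps the pulled images in sync with the kubeadm.yaml used here:

	# Pre-pull the control-plane images before "kubeadm init" (version from the log):
	sudo kubeadm config images pull --kubernetes-version v1.34.1
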
	I1025 10:32:30.855614  481784 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:32:31.168291  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:33.672032  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:30.861682  481784 out.go:252]   - Generating certificates and keys ...
	I1025 10:32:30.861876  481784 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:32:30.861986  481784 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:32:31.236284  481784 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:32:32.008572  481784 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:32:32.194336  481784 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:32:32.751986  481784 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:32:33.140555  481784 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:32:33.140989  481784 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-419185 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:32:33.740502  481784 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:32:33.740861  481784 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-419185 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:32:34.096018  481784 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:32:34.493827  481784 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:32:35.187613  481784 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:32:35.187936  481784 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:32:35.903327  481784 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:32:36.454515  481784 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:32:36.712783  481784 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:32:38.079940  481784 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:32:38.719063  481784 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:32:38.720018  481784 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:32:38.723049  481784 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 10:32:36.167127  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:38.683192  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:38.726747  481784 out.go:252]   - Booting up control plane ...
	I1025 10:32:38.726867  481784 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:32:38.726958  481784 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:32:38.727034  481784 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:32:38.743489  481784 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:32:38.743947  481784 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:32:38.751999  481784 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:32:38.753136  481784 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:32:38.753195  481784 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:32:38.899762  481784 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:32:38.899890  481784 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
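
The kubelet health endpoint named above is plain HTTP on localhost and can be probed by hand while kubeadm waits:

	# Poll the kubelet healthz endpoint kubeadm is checking:
	curl -sf http://127.0.0.1:10248/healthz && echo "kubelet healthy"
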
	W1025 10:32:41.166281  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:43.166378  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:40.399219  481784 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501798587s
	I1025 10:32:40.402990  481784 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:32:40.403427  481784 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:32:40.403760  481784 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:32:40.403861  481784 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:32:42.449523  481784 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.046114372s
	I1025 10:32:44.426534  481784 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.023477418s
	I1025 10:32:46.408777  481784 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00568754s
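
The three control-plane probes above are ordinary HTTPS endpoints and can be replayed on the node (-k skips verification, since these components serve cluster-internal certificates):

	curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
	curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -sk https://127.0.0.1:10259/livez        # kube-scheduler
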
	I1025 10:32:46.431368  481784 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:32:46.448953  481784 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:32:46.465989  481784 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:32:46.466207  481784 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-419185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:32:46.483267  481784 kubeadm.go:318] [bootstrap-token] Using token: beoinq.cc0b19b723jdlyfx
	I1025 10:32:46.486212  481784 out.go:252]   - Configuring RBAC rules ...
	I1025 10:32:46.486345  481784 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:32:46.500798  481784 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:32:46.509582  481784 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:32:46.516172  481784 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:32:46.520595  481784 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:32:46.525241  481784 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:32:46.816784  481784 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:32:47.279591  481784 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:32:47.815437  481784 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:32:47.816838  481784 kubeadm.go:318] 
	I1025 10:32:47.816915  481784 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:32:47.816921  481784 kubeadm.go:318] 
	I1025 10:32:47.817001  481784 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:32:47.817006  481784 kubeadm.go:318] 
	I1025 10:32:47.817032  481784 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:32:47.817094  481784 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:32:47.817146  481784 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:32:47.817151  481784 kubeadm.go:318] 
	I1025 10:32:47.817207  481784 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:32:47.817212  481784 kubeadm.go:318] 
	I1025 10:32:47.817262  481784 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:32:47.817266  481784 kubeadm.go:318] 
	I1025 10:32:47.817320  481784 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:32:47.817398  481784 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:32:47.817470  481784 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:32:47.817486  481784 kubeadm.go:318] 
	I1025 10:32:47.817577  481784 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:32:47.817658  481784 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:32:47.817667  481784 kubeadm.go:318] 
	I1025 10:32:47.817755  481784 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token beoinq.cc0b19b723jdlyfx \
	I1025 10:32:47.817863  481784 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:32:47.817884  481784 kubeadm.go:318] 	--control-plane 
	I1025 10:32:47.817888  481784 kubeadm.go:318] 
	I1025 10:32:47.817977  481784 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:32:47.817981  481784 kubeadm.go:318] 
	I1025 10:32:47.818067  481784 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token beoinq.cc0b19b723jdlyfx \
	I1025 10:32:47.818173  481784 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
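
The --discovery-token-ca-cert-hash value above is the SHA-256 of the cluster CA's public key. The standard kubeadm recipe for recomputing it is sketched below, assuming ca.crt sits under /var/lib/minikube/certs (this profile's cert directory, per the stat call earlier) rather than the default /etc/kubernetes/pki:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
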
	I1025 10:32:47.823011  481784 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:32:47.823274  481784 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:32:47.823384  481784 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:32:47.823399  481784 cni.go:84] Creating CNI manager for ""
	I1025 10:32:47.823408  481784 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:32:47.826547  481784 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:32:45.182444  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:47.667913  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:47.829544  481784 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:32:47.834074  481784 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:32:47.834105  481784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:32:47.853548  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:32:48.179628  481784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:32:48.179748  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:48.179782  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-419185 minikube.k8s.io/updated_at=2025_10_25T10_32_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=embed-certs-419185 minikube.k8s.io/primary=true
	I1025 10:32:48.201839  481784 ops.go:34] apiserver oom_adj: -16
	I1025 10:32:48.353871  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:48.853994  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:49.353939  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:49.854711  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:50.354539  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:50.854887  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:51.354643  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:51.854237  481784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:32:51.944345  481784 kubeadm.go:1113] duration metric: took 3.764706001s to wait for elevateKubeSystemPrivileges
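
The repeated "kubectl get sa default" runs above are a readiness gate: the RBAC setup is usable once the default service account exists. A shell sketch of the same wait, with binary and kubeconfig paths from the log:

	# Retry every 500ms until the default service account appears:
	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
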
	I1025 10:32:51.944379  481784 kubeadm.go:402] duration metric: took 21.376023702s to StartCluster
	I1025 10:32:51.944399  481784 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:51.944467  481784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:32:51.945775  481784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:32:51.945993  481784 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:32:51.946142  481784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:32:51.946348  481784 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:32:51.946417  481784 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-419185"
	I1025 10:32:51.946432  481784 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-419185"
	I1025 10:32:51.946453  481784 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:32:51.946960  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:51.947396  481784 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:32:51.947478  481784 addons.go:69] Setting default-storageclass=true in profile "embed-certs-419185"
	I1025 10:32:51.947538  481784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-419185"
	I1025 10:32:51.947824  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:51.950592  481784 out.go:179] * Verifying Kubernetes components...
	I1025 10:32:51.953781  481784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:32:52.006127  481784 addons.go:238] Setting addon default-storageclass=true in "embed-certs-419185"
	I1025 10:32:52.006197  481784 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:32:52.006726  481784 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:32:52.007370  481784 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:32:52.010168  481784 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:32:52.010201  481784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:32:52.010273  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:52.038146  481784 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:32:52.038171  481784 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:32:52.038245  481784 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:32:52.054434  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:52.080672  481784 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33437 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:32:52.292368  481784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:32:52.377245  481784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:32:52.377349  481784 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:32:52.460206  481784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:32:52.922541  481784 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
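
The sed pipeline above splices a hosts block ahead of CoreDNS's forward plugin, so the cluster resolves host.minikube.internal itself and falls through for everything else. The injected record can be confirmed in the live ConfigMap:

	kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }
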
	I1025 10:32:52.925079  481784 node_ready.go:35] waiting up to 6m0s for node "embed-certs-419185" to be "Ready" ...
	I1025 10:32:53.187045  481784 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1025 10:32:50.166814  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:52.673960  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	I1025 10:32:53.189156  481784 addons.go:514] duration metric: took 1.242791726s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1025 10:32:53.427572  481784 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-419185" context rescaled to 1 replicas
	W1025 10:32:54.928363  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:32:55.166338  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:57.166789  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:59.167021  478351 node_ready.go:57] node "default-k8s-diff-port-204074" has "Ready":"False" status (will retry)
	W1025 10:32:57.428127  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:32:59.927907  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	I1025 10:33:00.667599  478351 node_ready.go:49] node "default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:00.667631  478351 node_ready.go:38] duration metric: took 40.004157799s for node "default-k8s-diff-port-204074" to be "Ready" ...
	I1025 10:33:00.667645  478351 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:33:00.667711  478351 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:33:00.679382  478351 api_server.go:72] duration metric: took 41.770719145s to wait for apiserver process to appear ...
	I1025 10:33:00.679410  478351 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:33:00.679430  478351 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:33:00.687840  478351 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1025 10:33:00.688915  478351 api_server.go:141] control plane version: v1.34.1
	I1025 10:33:00.688946  478351 api_server.go:131] duration metric: took 9.521637ms to wait for apiserver health ...
	I1025 10:33:00.688956  478351 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:33:00.692561  478351 system_pods.go:59] 8 kube-system pods found
	I1025 10:33:00.692599  478351 system_pods.go:61] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:00.692606  478351 system_pods.go:61] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running
	I1025 10:33:00.692611  478351 system_pods.go:61] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:00.692616  478351 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running
	I1025 10:33:00.692621  478351 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running
	I1025 10:33:00.692625  478351 system_pods.go:61] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:00.692630  478351 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running
	I1025 10:33:00.692636  478351 system_pods.go:61] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:00.692647  478351 system_pods.go:74] duration metric: took 3.684551ms to wait for pod list to return data ...
	I1025 10:33:00.692664  478351 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:33:00.695348  478351 default_sa.go:45] found service account: "default"
	I1025 10:33:00.695371  478351 default_sa.go:55] duration metric: took 2.700645ms for default service account to be created ...
	I1025 10:33:00.695381  478351 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:33:00.698932  478351 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:00.698969  478351 system_pods.go:89] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:00.698977  478351 system_pods.go:89] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running
	I1025 10:33:00.698984  478351 system_pods.go:89] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:00.698994  478351 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running
	I1025 10:33:00.699003  478351 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running
	I1025 10:33:00.699008  478351 system_pods.go:89] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:00.699015  478351 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running
	I1025 10:33:00.699025  478351 system_pods.go:89] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:00.699050  478351 retry.go:31] will retry after 225.562001ms: missing components: kube-dns
	I1025 10:33:00.934214  478351 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:00.934303  478351 system_pods.go:89] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:00.934326  478351 system_pods.go:89] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running
	I1025 10:33:00.934366  478351 system_pods.go:89] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:00.934388  478351 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running
	I1025 10:33:00.934407  478351 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running
	I1025 10:33:00.934428  478351 system_pods.go:89] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:00.934469  478351 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running
	I1025 10:33:00.934496  478351 system_pods.go:89] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:00.934528  478351 retry.go:31] will retry after 316.847232ms: missing components: kube-dns
	I1025 10:33:01.256093  478351 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:01.256129  478351 system_pods.go:89] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:01.256137  478351 system_pods.go:89] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running
	I1025 10:33:01.256165  478351 system_pods.go:89] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:01.256179  478351 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running
	I1025 10:33:01.256184  478351 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running
	I1025 10:33:01.256192  478351 system_pods.go:89] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:01.256205  478351 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running
	I1025 10:33:01.256211  478351 system_pods.go:89] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:01.256235  478351 retry.go:31] will retry after 384.302327ms: missing components: kube-dns
	I1025 10:33:01.644366  478351 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:01.644401  478351 system_pods.go:89] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Running
	I1025 10:33:01.644409  478351 system_pods.go:89] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running
	I1025 10:33:01.644414  478351 system_pods.go:89] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:01.644450  478351 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running
	I1025 10:33:01.644461  478351 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running
	I1025 10:33:01.644469  478351 system_pods.go:89] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:01.644473  478351 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running
	I1025 10:33:01.644477  478351 system_pods.go:89] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Running
	I1025 10:33:01.644490  478351 system_pods.go:126] duration metric: took 949.103619ms to wait for k8s-apps to be running ...
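
The polling above retries with growing backoff until no component is missing. An equivalent one-shot wait, expressed with kubectl's own readiness primitive (a sketch; the kube-dns selector comes from the retry messages):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
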
	I1025 10:33:01.644504  478351 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:33:01.644580  478351 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:33:01.658666  478351 system_svc.go:56] duration metric: took 14.154099ms WaitForService to wait for kubelet
	I1025 10:33:01.658698  478351 kubeadm.go:586] duration metric: took 42.750039185s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:33:01.658716  478351 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:33:01.662299  478351 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:33:01.662330  478351 node_conditions.go:123] node cpu capacity is 2
	I1025 10:33:01.662344  478351 node_conditions.go:105] duration metric: took 3.622962ms to run NodePressure ...
	I1025 10:33:01.662381  478351 start.go:241] waiting for startup goroutines ...
	I1025 10:33:01.662396  478351 start.go:246] waiting for cluster config update ...
	I1025 10:33:01.662408  478351 start.go:255] writing updated cluster config ...
	I1025 10:33:01.662716  478351 ssh_runner.go:195] Run: rm -f paused
	I1025 10:33:01.667760  478351 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:33:01.672104  478351 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hwczp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.676815  478351 pod_ready.go:94] pod "coredns-66bc5c9577-hwczp" is "Ready"
	I1025 10:33:01.676882  478351 pod_ready.go:86] duration metric: took 4.748925ms for pod "coredns-66bc5c9577-hwczp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.679206  478351 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.683859  478351 pod_ready.go:94] pod "etcd-default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:01.683930  478351 pod_ready.go:86] duration metric: took 4.697388ms for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.686360  478351 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.691003  478351 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:01.691031  478351 pod_ready.go:86] duration metric: took 4.645786ms for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:01.693417  478351 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:02.072880  478351 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:02.072918  478351 pod_ready.go:86] duration metric: took 379.473198ms for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:02.272057  478351 pod_ready.go:83] waiting for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:02.672057  478351 pod_ready.go:94] pod "kube-proxy-qcgkj" is "Ready"
	I1025 10:33:02.672096  478351 pod_ready.go:86] duration metric: took 399.994564ms for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:02.872428  478351 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:03.272687  478351 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:03.272719  478351 pod_ready.go:86] duration metric: took 400.263859ms for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:03.272733  478351 pod_ready.go:40] duration metric: took 1.604891373s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:33:03.328790  478351 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:33:03.334007  478351 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-204074" cluster and "default" namespace by default
	W1025 10:33:01.930912  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:04.427884  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:06.427922  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:08.428112  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 10:33:00 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:00.839409647Z" level=info msg="Created container 121ac30775fab25631e08ca2deeab5606c2a584526da22c4f95e7ebbf2c0de21: kube-system/coredns-66bc5c9577-hwczp/coredns" id=6cf2122a-3d5f-4d1d-8f1a-f19e382a28a4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:00 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:00.841734591Z" level=info msg="Starting container: 121ac30775fab25631e08ca2deeab5606c2a584526da22c4f95e7ebbf2c0de21" id=8797443d-bc0e-42f3-af73-614f9d9d26e4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:33:00 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:00.844263895Z" level=info msg="Started container" PID=1724 containerID=121ac30775fab25631e08ca2deeab5606c2a584526da22c4f95e7ebbf2c0de21 description=kube-system/coredns-66bc5c9577-hwczp/coredns id=8797443d-bc0e-42f3-af73-614f9d9d26e4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=14a662a0627659056ab378b62f2284decefb14af38d1788bfc7cd1d42bdf836c
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.863651541Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a7056c82-568e-410f-a706-5c9ecf463bbf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.863715911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.875837565Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124 UID:b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc NetNS:/var/run/netns/18d9a654-9ed0-451b-81e6-8efd6914b4ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079478}] Aliases:map[]}"
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.875995557Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.891115485Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124 UID:b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc NetNS:/var/run/netns/18d9a654-9ed0-451b-81e6-8efd6914b4ae Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079478}] Aliases:map[]}"
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.891988374Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.896545887Z" level=info msg="Ran pod sandbox 81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124 with infra container: default/busybox/POD" id=a7056c82-568e-410f-a706-5c9ecf463bbf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.897810486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b45594ef-464e-4fa9-90aa-554139ded2a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.898072093Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b45594ef-464e-4fa9-90aa-554139ded2a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.898178876Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b45594ef-464e-4fa9-90aa-554139ded2a7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.899491393Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fafbf89d-5487-4dda-b65a-3ebe0d919e32 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:33:03 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:03.903816018Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.027627445Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=fafbf89d-5487-4dda-b65a-3ebe0d919e32 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.028431616Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e0b4901-3070-465f-8f39-116c43dce269 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.03034903Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=681ddcc5-704c-4167-b323-fba811a2e94a name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.035722267Z" level=info msg="Creating container: default/busybox/busybox" id=686307c4-fb9e-442b-a8ba-6ec3af870c2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.035892099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.040854172Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.041333948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.056756124Z" level=info msg="Created container fd3b0964b4e0d7213bbb4bcd9b445b8eb04aab97e78513ef2bc40b5c63062b3d: default/busybox/busybox" id=686307c4-fb9e-442b-a8ba-6ec3af870c2c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.057531403Z" level=info msg="Starting container: fd3b0964b4e0d7213bbb4bcd9b445b8eb04aab97e78513ef2bc40b5c63062b3d" id=554395cc-f96f-44a6-8007-3e3f48f03b98 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:33:06 default-k8s-diff-port-204074 crio[840]: time="2025-10-25T10:33:06.05961088Z" level=info msg="Started container" PID=1784 containerID=fd3b0964b4e0d7213bbb4bcd9b445b8eb04aab97e78513ef2bc40b5c63062b3d description=default/busybox/busybox id=554395cc-f96f-44a6-8007-3e3f48f03b98 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fd3b0964b4e0d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   81bdf9eb41852       busybox                                                default
	121ac30775fab       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   14a662a062765       coredns-66bc5c9577-hwczp                               kube-system
	c68639a5b2c0f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   96968b55d85c4       storage-provisioner                                    kube-system
	c21e6a8ac2b0d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   a6a7d27e9510e       kube-proxy-qcgkj                                       kube-system
	347b4a0a78050       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   8634852631b40       kindnet-pt5xf                                          kube-system
	48dec129cb5cd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   a4ad739ae67bc       kube-apiserver-default-k8s-diff-port-204074            kube-system
	c57067f4d0f91       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   57e8eff78c907       kube-scheduler-default-k8s-diff-port-204074            kube-system
	78d9fc26143d2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   8d586e94ad155       etcd-default-k8s-diff-port-204074                      kube-system
	dae89a6993a5a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   0f89a62e54172       kube-controller-manager-default-k8s-diff-port-204074   kube-system
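
A table like the one above can be reproduced on the node itself; crictl talks to CRI-O over its default socket:

	sudo crictl ps -a
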
	
	
	==> coredns [121ac30775fab25631e08ca2deeab5606c2a584526da22c4f95e7ebbf2c0de21] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60074 - 49701 "HINFO IN 1146029295978496035.3579498504482597569. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012091959s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-204074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-204074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-204074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-204074
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:33:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:33:00 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:33:00 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:33:00 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:33:00 +0000   Sat, 25 Oct 2025 10:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-204074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fedca12f-f823-4d61-b723-4e847b2985b6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-hwczp                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-204074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         58s
	  kube-system                 kindnet-pt5xf                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-204074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-204074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-proxy-qcgkj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-204074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s                kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s                kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s                kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-204074 event: Registered Node default-k8s-diff-port-204074 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-204074 status is now: NodeReady
	
	
	==> dmesg <==
	[ +31.156008] overlayfs: idmapped layers are currently not supported
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [78d9fc26143d256df7e6c07a85d47f1ade6f276d4969736c83563dde0a7820df] <==
	{"level":"warn","ts":"2025-10-25T10:32:07.352110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.383669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.415508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.451916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.494832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.509885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.539726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.571890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.587357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.615665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.648974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.671240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.686128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.752148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.791934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.830673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.868735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.876045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:07.896772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:08.008226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:19.391560Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.929017ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-204074\" limit:1 ","response":"range_response_count:1 size:5039"}
	{"level":"info","ts":"2025-10-25T10:32:19.391732Z","caller":"traceutil/trace.go:172","msg":"trace[76940555] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-204074; range_end:; response_count:1; response_revision:385; }","duration":"101.117287ms","start":"2025-10-25T10:32:19.290601Z","end":"2025-10-25T10:32:19.391718Z","steps":["trace[76940555] 'range keys from in-memory index tree'  (duration: 100.709626ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-25T10:32:19.392529Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.379337ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596610996762266 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-tqqxh.1871b55cbbb127b4\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-tqqxh.1871b55cbbb127b4\" value_size:781 lease:499224574141986431 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-25T10:32:19.392981Z","caller":"traceutil/trace.go:172","msg":"trace[1907951443] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"172.628145ms","start":"2025-10-25T10:32:19.220339Z","end":"2025-10-25T10:32:19.392967Z","steps":["trace[1907951443] 'process raft request'  (duration: 70.452982ms)","trace[1907951443] 'compare'  (duration: 100.489308ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-25T10:32:20.948116Z","caller":"traceutil/trace.go:172","msg":"trace[331669593] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"104.364794ms","start":"2025-10-25T10:32:20.843733Z","end":"2025-10-25T10:32:20.948098Z","steps":["trace[331669593] 'process raft request'  (duration: 50.700511ms)","trace[331669593] 'compare'  (duration: 53.236651ms)"],"step_count":2}
	
	
	==> kernel <==
	 10:33:13 up  2:15,  0 user,  load average: 4.38, 3.81, 3.18
	Linux default-k8s-diff-port-204074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [347b4a0a7805048965c052b58e01f77e03b3312b5b86f20064cf9bce6339b6b8] <==
	I1025 10:32:19.786219       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:32:19.786476       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:32:19.786611       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:32:19.786622       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:32:19.786636       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:32:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:32:20.000556       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:32:20.000590       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:32:20.000601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:32:20.079365       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:32:49.989015       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:32:49.990387       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:32:49.990566       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:32:49.997429       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1025 10:32:51.601112       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:32:51.601208       1 metrics.go:72] Registering metrics
	I1025 10:32:51.601275       1 controller.go:711] "Syncing nftables rules"
	I1025 10:32:59.990350       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:32:59.990493       1 main.go:301] handling current node
	I1025 10:33:09.990818       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:33:09.990858       1 main.go:301] handling current node
	
	
	==> kube-apiserver [48dec129cb5cd350e60b22c7d4d48076e9fc200378fa3fb69c5d66c090e85737] <==
	I1025 10:32:09.844667       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:32:09.853176       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:09.853389       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1025 10:32:09.854652       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1025 10:32:09.882544       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:09.883078       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:32:10.000747       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:32:10.078180       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:32:10.112846       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:32:10.118039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:32:12.317159       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:32:12.424766       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:32:12.555239       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:32:12.579444       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:32:12.581043       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:32:12.592642       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:32:12.692883       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:32:14.864244       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:32:14.897951       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:32:14.964425       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:32:18.486869       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:18.493646       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:18.545279       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:32:18.693576       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 10:33:11.700953       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:44832: use of closed network connection
	
	
	==> kube-controller-manager [dae89a6993a5ac980075ad255d2bb63811776cf03ab2c2dab161ef775f04afc1] <==
	I1025 10:32:17.776322       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:32:17.776354       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:32:17.776625       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:32:17.781038       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:32:17.781676       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:32:17.793646       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:32:17.793718       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:32:17.793912       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:32:17.793978       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:32:17.794053       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-204074"
	I1025 10:32:17.794095       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:32:17.794123       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:32:17.794275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:32:17.797646       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:32:17.797672       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:32:17.797713       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:32:17.802697       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:32:17.824711       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:32:17.825109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:32:17.825139       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:32:17.825145       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:32:17.825314       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:32:17.831454       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:32:17.831503       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:33:02.802346       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c21e6a8ac2b0d16e083972229c5c55e49f68d12efdbf5e19d97020426c52d648] <==
	I1025 10:32:19.828171       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:32:19.941069       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:32:20.042099       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:32:20.042133       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:32:20.042231       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:32:20.099570       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:32:20.099643       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:32:20.184049       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:32:20.184398       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:32:20.184413       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:32:20.196461       1 config.go:200] "Starting service config controller"
	I1025 10:32:20.196482       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:32:20.196504       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:32:20.196509       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:32:20.196528       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:32:20.196534       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:32:20.196634       1 config.go:309] "Starting node config controller"
	I1025 10:32:20.196643       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:32:20.196649       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:32:20.297436       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:32:20.297474       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:32:20.297503       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c57067f4d0f91485034f56774f7fa98047146011f4c544c5620573fde6e68b9f] <==
	I1025 10:32:10.045348       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:32:13.999619       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:32:13.999655       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:32:14.006506       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:32:14.006556       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:32:14.006604       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:32:14.006621       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:32:14.006635       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:32:14.006641       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:32:14.015446       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:32:14.015525       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:32:14.114394       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:32:14.114459       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:32:14.114548       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:32:16 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:16.303908    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-204074" podStartSLOduration=1.303868806 podStartE2EDuration="1.303868806s" podCreationTimestamp="2025-10-25 10:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:32:16.284758296 +0000 UTC m=+1.519200949" watchObservedRunningTime="2025-10-25 10:32:16.303868806 +0000 UTC m=+1.538311459"
	Oct 25 10:32:17 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:17.804812    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:32:17 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:17.807639    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.811067    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58092077-ef5b-4a3a-ac91-d90b746fb830-lib-modules\") pod \"kindnet-pt5xf\" (UID: \"58092077-ef5b-4a3a-ac91-d90b746fb830\") " pod="kube-system/kindnet-pt5xf"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.811114    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44p6\" (UniqueName: \"kubernetes.io/projected/58092077-ef5b-4a3a-ac91-d90b746fb830-kube-api-access-q44p6\") pod \"kindnet-pt5xf\" (UID: \"58092077-ef5b-4a3a-ac91-d90b746fb830\") " pod="kube-system/kindnet-pt5xf"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.811166    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/58092077-ef5b-4a3a-ac91-d90b746fb830-cni-cfg\") pod \"kindnet-pt5xf\" (UID: \"58092077-ef5b-4a3a-ac91-d90b746fb830\") " pod="kube-system/kindnet-pt5xf"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.811190    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58092077-ef5b-4a3a-ac91-d90b746fb830-xtables-lock\") pod \"kindnet-pt5xf\" (UID: \"58092077-ef5b-4a3a-ac91-d90b746fb830\") " pod="kube-system/kindnet-pt5xf"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.914152    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c1000b6-fabf-48f5-add5-0ff29481b2cd-kube-proxy\") pod \"kube-proxy-qcgkj\" (UID: \"8c1000b6-fabf-48f5-add5-0ff29481b2cd\") " pod="kube-system/kube-proxy-qcgkj"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.914204    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c1000b6-fabf-48f5-add5-0ff29481b2cd-xtables-lock\") pod \"kube-proxy-qcgkj\" (UID: \"8c1000b6-fabf-48f5-add5-0ff29481b2cd\") " pod="kube-system/kube-proxy-qcgkj"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.914227    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c1000b6-fabf-48f5-add5-0ff29481b2cd-lib-modules\") pod \"kube-proxy-qcgkj\" (UID: \"8c1000b6-fabf-48f5-add5-0ff29481b2cd\") " pod="kube-system/kube-proxy-qcgkj"
	Oct 25 10:32:18 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:18.914260    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4mdf\" (UniqueName: \"kubernetes.io/projected/8c1000b6-fabf-48f5-add5-0ff29481b2cd-kube-api-access-j4mdf\") pod \"kube-proxy-qcgkj\" (UID: \"8c1000b6-fabf-48f5-add5-0ff29481b2cd\") " pod="kube-system/kube-proxy-qcgkj"
	Oct 25 10:32:19 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:19.132488    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:32:20 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:20.433176    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pt5xf" podStartSLOduration=2.43314943 podStartE2EDuration="2.43314943s" podCreationTimestamp="2025-10-25 10:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:32:20.286499976 +0000 UTC m=+5.520942629" watchObservedRunningTime="2025-10-25 10:32:20.43314943 +0000 UTC m=+5.667592091"
	Oct 25 10:32:21 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:32:21.639381    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qcgkj" podStartSLOduration=3.639358204 podStartE2EDuration="3.639358204s" podCreationTimestamp="2025-10-25 10:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:32:20.522772836 +0000 UTC m=+5.757215497" watchObservedRunningTime="2025-10-25 10:32:21.639358204 +0000 UTC m=+6.873800890"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:00.406412    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:00.613301    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q4s5\" (UniqueName: \"kubernetes.io/projected/7dcb2226-851a-46f1-8af1-0e796e81167c-kube-api-access-7q4s5\") pod \"storage-provisioner\" (UID: \"7dcb2226-851a-46f1-8af1-0e796e81167c\") " pod="kube-system/storage-provisioner"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:00.613357    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76f57b6b-0326-45da-9fb3-89258d0b3cd7-config-volume\") pod \"coredns-66bc5c9577-hwczp\" (UID: \"76f57b6b-0326-45da-9fb3-89258d0b3cd7\") " pod="kube-system/coredns-66bc5c9577-hwczp"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:00.613413    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/7dcb2226-851a-46f1-8af1-0e796e81167c-tmp\") pod \"storage-provisioner\" (UID: \"7dcb2226-851a-46f1-8af1-0e796e81167c\") " pod="kube-system/storage-provisioner"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:00.613430    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rxg\" (UniqueName: \"kubernetes.io/projected/76f57b6b-0326-45da-9fb3-89258d0b3cd7-kube-api-access-k5rxg\") pod \"coredns-66bc5c9577-hwczp\" (UID: \"76f57b6b-0326-45da-9fb3-89258d0b3cd7\") " pod="kube-system/coredns-66bc5c9577-hwczp"
	Oct 25 10:33:00 default-k8s-diff-port-204074 kubelet[1315]: W1025 10:33:00.763902    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-96968b55d85c4bead38c7ddfc9c89d97df445146022f7f788576f9280c8a9af2 WatchSource:0}: Error finding container 96968b55d85c4bead38c7ddfc9c89d97df445146022f7f788576f9280c8a9af2: Status 404 returned error can't find the container with id 96968b55d85c4bead38c7ddfc9c89d97df445146022f7f788576f9280c8a9af2
	Oct 25 10:33:01 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:01.369172    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hwczp" podStartSLOduration=43.369148891 podStartE2EDuration="43.369148891s" podCreationTimestamp="2025-10-25 10:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:33:01.349704938 +0000 UTC m=+46.584147640" watchObservedRunningTime="2025-10-25 10:33:01.369148891 +0000 UTC m=+46.603591544"
	Oct 25 10:33:01 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:01.386133    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.386111993 podStartE2EDuration="41.386111993s" podCreationTimestamp="2025-10-25 10:32:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:33:01.369914546 +0000 UTC m=+46.604357207" watchObservedRunningTime="2025-10-25 10:33:01.386111993 +0000 UTC m=+46.620554646"
	Oct 25 10:33:03 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:03.632188    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwfjb\" (UniqueName: \"kubernetes.io/projected/b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc-kube-api-access-jwfjb\") pod \"busybox\" (UID: \"b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc\") " pod="default/busybox"
	Oct 25 10:33:03 default-k8s-diff-port-204074 kubelet[1315]: W1025 10:33:03.896180    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124 WatchSource:0}: Error finding container 81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124: Status 404 returned error can't find the container with id 81bdf9eb418528b27dda286639f92e6b765d5d2c88d654c5bca67ae87b6b6124
	Oct 25 10:33:06 default-k8s-diff-port-204074 kubelet[1315]: I1025 10:33:06.364997    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.2340802100000001 podStartE2EDuration="3.364981491s" podCreationTimestamp="2025-10-25 10:33:03 +0000 UTC" firstStartedPulling="2025-10-25 10:33:03.898529551 +0000 UTC m=+49.132972204" lastFinishedPulling="2025-10-25 10:33:06.029430832 +0000 UTC m=+51.263873485" observedRunningTime="2025-10-25 10:33:06.364650853 +0000 UTC m=+51.599093514" watchObservedRunningTime="2025-10-25 10:33:06.364981491 +0000 UTC m=+51.599424152"
	
	
	==> storage-provisioner [c68639a5b2c0f935cfa8d6025bf7ede68f4f79ac6159a2fb6f1a8f5e777ff553] <==
	I1025 10:33:00.865424       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:33:00.907798       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:33:00.907916       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:33:00.911438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:00.932958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:33:00.933299       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:33:00.933505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_146c126c-cbdc-45f7-8458-0e01d79caa92!
	I1025 10:33:00.934470       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b6162f7-ef21-4da6-838b-9cd22ec3453b", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-204074_146c126c-cbdc-45f7-8458-0e01d79caa92 became leader
	W1025 10:33:00.942309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:00.946600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:33:01.034077       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_146c126c-cbdc-45f7-8458-0e01d79caa92!
	W1025 10:33:02.949741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:02.956216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:04.959473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:04.966198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:06.969588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:06.976373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:08.979060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:08.983265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:10.986881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:10.994601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:12.998464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:13.006140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.49s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (279.528551ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:33:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
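For context on the MK_ADDON_ENABLE_PAUSED failure above: before enabling an addon, minikube first checks whether the cluster's containers are paused, and with the crio runtime that check shells out to `sudo runc list -f json`. Here runc cannot open its state directory (/run/runc), so the command exits with status 1 and the whole `addons enable` aborts before metrics-server is ever deployed. Below is a minimal, hypothetical Go sketch of that style of paused check, not minikube's actual code; `runcContainer` and `listPaused` are illustrative names, and only the `runc list -f json` invocation and its JSON `id`/`status` fields are taken from the log above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output that a
// paused check needs; runc emits a JSON array of container states.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "running", "paused", ...
}

// listPaused runs `sudo runc list -f json` and returns the IDs of paused
// containers. If runc's state directory (/run/runc) is missing, the command
// exits non-zero and the check fails outright -- the same shape as the
// "Process exited with status 1" error above.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", paused)
}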
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-419185 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-419185 describe deploy/metrics-server -n kube-system: exit status 1 (101.849592ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-419185 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
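The expected substring in the assertion above is simply the `--registries=MetricsServer=fake.domain` value prefixed onto the `--images=MetricsServer=registry.k8s.io/echoserver:1.4` value from the command at the top of this test. A minimal sketch of that composition (`joinImage` is an illustrative helper, not minikube's API):

package main

import "fmt"

// joinImage prefixes a custom registry onto an image override, which is how
// the two flags above combine into the string the test greps for.
func joinImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	fmt.Println(joinImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// prints: fake.domain/registry.k8s.io/echoserver:1.4
}

Because the enable aborted on the paused check, the metrics-server deployment was never created (hence the NotFound error above), so the deployment info is empty and the substring check fails.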
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-419185
helpers_test.go:243: (dbg) docker inspect embed-certs-419185:

-- stdout --
	[
	    {
	        "Id": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	        "Created": "2025-10-25T10:32:21.18342263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 482565,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:32:21.251713993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hosts",
	        "LogPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa-json.log",
	        "Name": "/embed-certs-419185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-419185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-419185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	                "LowerDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-419185",
	                "Source": "/var/lib/docker/volumes/embed-certs-419185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-419185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-419185",
	                "name.minikube.sigs.k8s.io": "embed-certs-419185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "53bc26ada20c0692625d292ccef8e2a544b7c0861ce438f5946d133641b2d244",
	            "SandboxKey": "/var/run/docker/netns/53bc26ada20c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-419185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:5b:e9:f7:89:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2813ac098a0563027aa465aa29bfe18ee37b22086f641503f6265d21106417e7",
	                    "EndpointID": "8eec3085a2e2620f0f92fe91e5652b483c0cd4906941b037b344d76993733e83",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-419185",
	                        "1fda185b5ef1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25: (1.668065734s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p kubernetes-upgrade-845331                                                                                                                                                                                                                  │ kubernetes-upgrade-845331    │ jenkins │ v1.37.0 │ 25 Oct 25 10:27 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p force-systemd-env-068963                                                                                                                                                                                                                   │ force-systemd-env-068963     │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ cert-options-506318 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:33:26
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:33:26.509660  485483 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:33:26.509837  485483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:33:26.509867  485483 out.go:374] Setting ErrFile to fd 2...
	I1025 10:33:26.509886  485483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:33:26.510146  485483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:33:26.510564  485483 out.go:368] Setting JSON to false
	I1025 10:33:26.511679  485483 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8156,"bootTime":1761380250,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:33:26.511779  485483 start.go:141] virtualization:  
	I1025 10:33:26.514797  485483 out.go:179] * [default-k8s-diff-port-204074] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:33:26.518656  485483 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:33:26.518730  485483 notify.go:220] Checking for updates...
	I1025 10:33:26.524706  485483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:33:26.527781  485483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:33:26.530782  485483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:33:26.533850  485483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:33:26.536978  485483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:33:26.540248  485483 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:33:26.540816  485483 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:33:26.566078  485483 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:33:26.566202  485483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:33:26.627112  485483 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:33:26.61812822 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:33:26.627249  485483 docker.go:318] overlay module found
	I1025 10:33:26.630471  485483 out.go:179] * Using the docker driver based on existing profile
	I1025 10:33:26.633325  485483 start.go:305] selected driver: docker
	I1025 10:33:26.633351  485483 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-204074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-204074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:33:26.634486  485483 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:33:26.635542  485483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:33:26.691762  485483 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:33:26.682633663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:33:26.692111  485483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:33:26.692148  485483 cni.go:84] Creating CNI manager for ""
	I1025 10:33:26.692211  485483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:33:26.692253  485483 start.go:349] cluster config:
	{Name:default-k8s-diff-port-204074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-204074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:33:26.695336  485483 out.go:179] * Starting "default-k8s-diff-port-204074" primary control-plane node in "default-k8s-diff-port-204074" cluster
	I1025 10:33:26.698186  485483 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:33:26.701130  485483 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:33:26.703976  485483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:33:26.704002  485483 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:33:26.704026  485483 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:33:26.704072  485483 cache.go:58] Caching tarball of preloaded images
	I1025 10:33:26.704157  485483 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:33:26.704168  485483 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:33:26.704280  485483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/config.json ...
	I1025 10:33:26.723690  485483 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:33:26.723710  485483 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:33:26.723727  485483 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:33:26.723750  485483 start.go:360] acquireMachinesLock for default-k8s-diff-port-204074: {Name:mkd96684f4339071cedf0ba19aa38cca6816b0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:33:26.723811  485483 start.go:364] duration metric: took 38.474µs to acquireMachinesLock for "default-k8s-diff-port-204074"
	I1025 10:33:26.723833  485483 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:33:26.723846  485483 fix.go:54] fixHost starting: 
	I1025 10:33:26.724128  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:26.748962  485483 fix.go:112] recreateIfNeeded on default-k8s-diff-port-204074: state=Stopped err=<nil>
	W1025 10:33:26.749000  485483 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:33:26.428550  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:28.429065  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	I1025 10:33:26.752223  485483 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-204074" ...
	I1025 10:33:26.752315  485483 cli_runner.go:164] Run: docker start default-k8s-diff-port-204074
	I1025 10:33:26.995105  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:27.021487  485483 kic.go:430] container "default-k8s-diff-port-204074" state is running.
	I1025 10:33:27.021879  485483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-204074
	I1025 10:33:27.044408  485483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/config.json ...
	I1025 10:33:27.044794  485483 machine.go:93] provisionDockerMachine start ...
	I1025 10:33:27.044922  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:27.066212  485483 main.go:141] libmachine: Using SSH client type: native
	I1025 10:33:27.066534  485483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33442 <nil> <nil>}
	I1025 10:33:27.066543  485483 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:33:27.067270  485483 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60034->127.0.0.1:33442: read: connection reset by peer
	I1025 10:33:30.222935  485483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-204074
	
	I1025 10:33:30.222965  485483 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-204074"
	I1025 10:33:30.223031  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:30.241199  485483 main.go:141] libmachine: Using SSH client type: native
	I1025 10:33:30.241508  485483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33442 <nil> <nil>}
	I1025 10:33:30.241519  485483 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-204074 && echo "default-k8s-diff-port-204074" | sudo tee /etc/hostname
	I1025 10:33:30.400274  485483 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-204074
	
	I1025 10:33:30.400360  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:30.422968  485483 main.go:141] libmachine: Using SSH client type: native
	I1025 10:33:30.423306  485483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33442 <nil> <nil>}
	I1025 10:33:30.423331  485483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-204074' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-204074/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-204074' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:33:30.575398  485483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:33:30.575427  485483 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:33:30.575502  485483 ubuntu.go:190] setting up certificates
	I1025 10:33:30.575512  485483 provision.go:84] configureAuth start
	I1025 10:33:30.575581  485483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-204074
	I1025 10:33:30.591991  485483 provision.go:143] copyHostCerts
	I1025 10:33:30.592068  485483 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:33:30.592090  485483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:33:30.592168  485483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:33:30.592276  485483 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:33:30.592287  485483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:33:30.592320  485483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:33:30.592430  485483 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:33:30.592443  485483 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:33:30.592476  485483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:33:30.592532  485483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-204074 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-204074 localhost minikube]
	I1025 10:33:30.683869  485483 provision.go:177] copyRemoteCerts
	I1025 10:33:30.683937  485483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:33:30.683994  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:30.702243  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:30.811259  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:33:30.828988  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1025 10:33:30.846967  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1025 10:33:30.865104  485483 provision.go:87] duration metric: took 289.56908ms to configureAuth
	I1025 10:33:30.865132  485483 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:33:30.865321  485483 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:33:30.865433  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:30.882223  485483 main.go:141] libmachine: Using SSH client type: native
	I1025 10:33:30.882548  485483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33442 <nil> <nil>}
	I1025 10:33:30.882572  485483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:33:31.205792  485483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:33:31.205856  485483 machine.go:96] duration metric: took 4.161047744s to provisionDockerMachine
	I1025 10:33:31.205893  485483 start.go:293] postStartSetup for "default-k8s-diff-port-204074" (driver="docker")
	I1025 10:33:31.205924  485483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:33:31.206048  485483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:33:31.206128  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:31.230624  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:31.340103  485483 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:33:31.343480  485483 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:33:31.343508  485483 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:33:31.343519  485483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:33:31.343572  485483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:33:31.343650  485483 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:33:31.343752  485483 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:33:31.351436  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:33:31.368653  485483 start.go:296] duration metric: took 162.725147ms for postStartSetup
	I1025 10:33:31.368731  485483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:33:31.368770  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:31.385770  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:31.488092  485483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:33:31.492836  485483 fix.go:56] duration metric: took 4.768982395s for fixHost
	I1025 10:33:31.492859  485483 start.go:83] releasing machines lock for "default-k8s-diff-port-204074", held for 4.769036401s
	I1025 10:33:31.492925  485483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-204074
	I1025 10:33:31.510180  485483 ssh_runner.go:195] Run: cat /version.json
	I1025 10:33:31.510240  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:31.510564  485483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:33:31.510614  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:31.529231  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:31.530197  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:31.715863  485483 ssh_runner.go:195] Run: systemctl --version
	I1025 10:33:31.722389  485483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:33:31.766775  485483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:33:31.770984  485483 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:33:31.771120  485483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:33:31.779635  485483 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:33:31.779659  485483 start.go:495] detecting cgroup driver to use...
	I1025 10:33:31.779692  485483 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:33:31.779742  485483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:33:31.800186  485483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:33:31.814175  485483 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:33:31.814298  485483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:33:31.829406  485483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:33:31.843124  485483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:33:31.962330  485483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:33:32.088480  485483 docker.go:234] disabling docker service ...
	I1025 10:33:32.088565  485483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:33:32.105241  485483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:33:32.118777  485483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:33:32.245526  485483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:33:32.367302  485483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:33:32.381090  485483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:33:32.396507  485483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:33:32.396602  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.405922  485483 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:33:32.406039  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.415554  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.425357  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.438641  485483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:33:32.447301  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.456318  485483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.465920  485483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:33:32.474684  485483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:33:32.482093  485483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:33:32.489325  485483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:33:32.620316  485483 ssh_runner.go:195] Run: sudo systemctl restart crio
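The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before restarting CRI-O. As a rough illustration of the first of those edits, the same whole-line pause_image rewrite can be expressed with a line-anchored regular expression; this is a hypothetical sketch, not minikube's implementation:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// a tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf
	cfg := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n"
	// (?m) makes ^ and $ match per line, like sed's line addressing
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	fmt.Print(re.ReplaceAllString(cfg, `pause_image = "registry.k8s.io/pause:3.10.1"`))
}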
	I1025 10:33:32.773507  485483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:33:32.773630  485483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:33:32.777683  485483 start.go:563] Will wait 60s for crictl version
	I1025 10:33:32.777792  485483 ssh_runner.go:195] Run: which crictl
	I1025 10:33:32.781456  485483 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:33:32.806108  485483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:33:32.806253  485483 ssh_runner.go:195] Run: crio --version
	I1025 10:33:32.833558  485483 ssh_runner.go:195] Run: crio --version
	I1025 10:33:32.868534  485483 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:33:32.871404  485483 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-204074 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:33:32.892862  485483 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:33:32.896908  485483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:33:32.906716  485483 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-204074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-204074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:33:32.906845  485483 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:33:32.906913  485483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:33:32.941448  485483 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:33:32.941472  485483 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:33:32.941526  485483 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:33:32.967578  485483 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:33:32.967598  485483 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:33:32.967606  485483 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1025 10:33:32.967704  485483 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-204074 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-204074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:33:32.967786  485483 ssh_runner.go:195] Run: crio config
	I1025 10:33:33.027801  485483 cni.go:84] Creating CNI manager for ""
	I1025 10:33:33.027829  485483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:33:33.027902  485483 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:33:33.027938  485483 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-204074 NodeName:default-k8s-diff-port-204074 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:33:33.028098  485483 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-204074"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
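The kubeadm configuration above is rendered from the cluster parameters logged earlier (advertise address 192.168.85.2, bind port 8444, node name default-k8s-diff-port-204074). A simplified sketch of that parameter-to-YAML rendering with Go's text/template follows; the template text and struct fields are illustrative assumptions, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// a cut-down stand-in for the InitConfiguration fragment above
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	params := struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
	}{"192.168.85.2", 8444, "default-k8s-diff-port-204074"}
	if err := t.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}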
	
	I1025 10:33:33.028177  485483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:33:33.036293  485483 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:33:33.036417  485483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:33:33.044591  485483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1025 10:33:33.058202  485483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:33:33.072820  485483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1025 10:33:33.087691  485483 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:33:33.091538  485483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
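Both /etc/hosts rewrites above (host.minikube.internal earlier and control-plane.minikube.internal here) follow the same idempotent pattern: drop any existing line for the name, then append the new tab-separated mapping. A minimal standalone sketch of that pattern (a hypothetical helper, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any line already ending in "\t<name>" and
// appends "<ip>\t<name>", mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return strings.Join(kept, "\n")
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal"
	fmt.Println(upsertHostsEntry(hosts, "192.168.85.2", "control-plane.minikube.internal"))
}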
	I1025 10:33:33.102157  485483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:33:33.217528  485483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:33:33.240826  485483 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074 for IP: 192.168.85.2
	I1025 10:33:33.240896  485483 certs.go:195] generating shared ca certs ...
	I1025 10:33:33.240927  485483 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:33:33.241090  485483 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:33:33.241177  485483 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:33:33.241200  485483 certs.go:257] generating profile certs ...
	I1025 10:33:33.241303  485483 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.key
	I1025 10:33:33.241396  485483 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/apiserver.key.5f4a3ffb
	I1025 10:33:33.241474  485483 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/proxy-client.key
	I1025 10:33:33.241628  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:33:33.241691  485483 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:33:33.241716  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:33:33.241773  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:33:33.241818  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:33:33.241871  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:33:33.241945  485483 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:33:33.242619  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:33:33.262682  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:33:33.283226  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:33:33.304144  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:33:33.329420  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1025 10:33:33.349796  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:33:33.379999  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:33:33.407776  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:33:33.442482  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:33:33.466115  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:33:33.487083  485483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:33:33.508796  485483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
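
The three "skipping valid signed profile cert regeneration" decisions above, and the scp pushes that follow, hinge on whether a cached profile cert still chains to the cached minikube CA. A minimal sketch of that kind of check in Go, with illustrative paths and a hypothetical signedByCA helper (this is not minikube's certs.go):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// signedByCA reports whether the certificate at certPath chains to the
// CA at caPath -- the shape of the reuse check behind "skipping valid
// signed profile cert regeneration". Paths below are illustrative.
func signedByCA(certPath, caPath string) (bool, error) {
	caPEM, err := os.ReadFile(caPath)
	if err != nil {
		return false, err
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(caPEM) {
		return false, fmt.Errorf("no CA certs in %s", caPath)
	}
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(certPEM)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	_, err = cert.Verify(x509.VerifyOptions{Roots: roots})
	return err == nil, nil
}

func main() {
	ok, err := signedByCA(
		"/var/lib/minikube/certs/apiserver.crt",
		"/var/lib/minikube/certs/ca.crt")
	fmt.Println(ok, err)
}

Only the reuse path appears in this run; a failed chain (or expiry, below) would trigger regeneration and a fresh push.
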
	I1025 10:33:33.525615  485483 ssh_runner.go:195] Run: openssl version
	I1025 10:33:33.532388  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:33:33.540836  485483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:33:33.544737  485483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:33:33.544804  485483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:33:33.591554  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:33:33.600685  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:33:33.608967  485483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:33:33.612826  485483 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:33:33.612930  485483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:33:33.654120  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:33:33.661979  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:33:33.671394  485483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:33:33.674982  485483 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:33:33.675071  485483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:33:33.716162  485483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
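
Each ls/openssl/ln triple above implements OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located by the hash of its subject, so every pushed PEM gets a symlink named <subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A sketch of the same operation, shelling out to openssl just as the log does; paths are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash symlinks pemPath into dir under the name
// "<openssl subject hash>.0", which is how OpenSSL finds trusted CAs
// in a hashed certificate directory.
func linkBySubjectHash(pemPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(dir, hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
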
	I1025 10:33:33.724204  485483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:33:33.728280  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:33:33.769849  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:33:33.811814  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:33:33.868529  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:33:33.928937  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:33:34.003632  485483 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
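
The six openssl x509 -checkend 86400 runs above each exit non-zero if the named cert expires within 86400 seconds (24 hours), which would force regeneration. An equivalent in-process check, sketched with an assumed path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend <seconds>`: it returns
// true if the certificate at path expires within the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	exp, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 86400*time.Second)
	fmt.Println(exp, err)
}
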
	I1025 10:33:34.094560  485483 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-204074 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-204074 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:33:34.094723  485483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:33:34.094822  485483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:33:34.150229  485483 cri.go:89] found id: "4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed"
	I1025 10:33:34.150304  485483 cri.go:89] found id: "802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e"
	I1025 10:33:34.150332  485483 cri.go:89] found id: "357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037"
	I1025 10:33:34.150353  485483 cri.go:89] found id: "cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310"
	I1025 10:33:34.150389  485483 cri.go:89] found id: ""
	I1025 10:33:34.150467  485483 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:33:34.167961  485483 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:33:34Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:33:34.168117  485483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:33:34.185011  485483 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:33:34.185080  485483 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:33:34.185175  485483 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:33:34.196593  485483 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:33:34.197489  485483 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-204074" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:33:34.198050  485483 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-204074" cluster setting kubeconfig missing "default-k8s-diff-port-204074" context setting]
	I1025 10:33:34.198877  485483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:33:34.200870  485483 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:33:34.211926  485483 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:33:34.212004  485483 kubeadm.go:601] duration metric: took 26.902853ms to restartPrimaryControlPlane
	I1025 10:33:34.212028  485483 kubeadm.go:402] duration metric: took 117.477587ms to StartCluster
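
The kubeconfig repair above (kubeconfig.go:47/62) first verifies that the profile has both a cluster and a context entry in the shared kubeconfig, then rewrites the file under the WriteFile lock shown at lock.go:35. A sketch of the lookup half using client-go's clientcmd loader; the helper name hasProfile is hypothetical:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// hasProfile reports whether a kubeconfig file already carries both a
// cluster and a context entry for the given minikube profile name.
func hasProfile(kubeconfig, profile string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return false, err
	}
	_, cluster := cfg.Clusters[profile]
	_, context := cfg.Contexts[profile]
	return cluster && context, nil
}

func main() {
	ok, err := hasProfile("/home/jenkins/minikube-integration/21794-292167/kubeconfig", "default-k8s-diff-port-204074")
	fmt.Println(ok, err)
}
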
	I1025 10:33:34.212079  485483 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:33:34.212157  485483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:33:34.213663  485483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:33:34.213933  485483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:33:34.214437  485483 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:33:34.214425  485483 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:33:34.214513  485483 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-204074"
	I1025 10:33:34.214540  485483 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-204074"
	W1025 10:33:34.214550  485483 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:33:34.214566  485483 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-204074"
	I1025 10:33:34.214582  485483 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-204074"
	I1025 10:33:34.214597  485483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-204074"
	I1025 10:33:34.214612  485483 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-204074"
	W1025 10:33:34.214647  485483 addons.go:247] addon dashboard should already be in state true
	I1025 10:33:34.214687  485483 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:33:34.214912  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:34.215411  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:34.214575  485483 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:33:34.216187  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:34.220181  485483 out.go:179] * Verifying Kubernetes components...
	I1025 10:33:34.223361  485483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:33:34.274571  485483 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-204074"
	W1025 10:33:34.274598  485483 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:33:34.274622  485483 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:33:34.275027  485483 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:33:34.278187  485483 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:33:34.281155  485483 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:33:34.284286  485483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:33:34.284308  485483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:33:34.284372  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:34.284450  485483 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1025 10:33:30.928628  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:32.928717  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	W1025 10:33:34.937149  481784 node_ready.go:57] node "embed-certs-419185" has "Ready":"False" status (will retry)
	I1025 10:33:34.287354  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:33:34.287383  485483 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:33:34.287450  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:34.316886  485483 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:33:34.316909  485483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:33:34.316993  485483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:33:34.346607  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:34.348803  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:34.374294  485483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:33:34.583652  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:33:34.583675  485483 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:33:34.583972  485483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:33:34.604323  485483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:33:34.635859  485483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:33:34.643006  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:33:34.643068  485483 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:33:34.755973  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:33:34.756038  485483 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:33:34.836586  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:33:34.836662  485483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:33:34.890624  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:33:34.890689  485483 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:33:34.960443  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:33:34.960510  485483 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:33:34.992859  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:33:34.992933  485483 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:33:35.029597  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:33:35.029692  485483 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:33:35.051774  485483 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:33:35.051848  485483 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:33:35.068860  485483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
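
The dashboard deploy above stages ten manifests under /etc/kubernetes/addons and applies them in a single kubectl invocation, one -f per file, with KUBECONFIG pointed at the node-local config. A sketch of building that command from Go (two of the paths shown; the rest follow the same pattern):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ...remaining dashboard manifests, exactly as listed in the log
	}
	// sudo accepts VAR=value assignments ahead of the command to run.
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Println(string(out), err)
}
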
	I1025 10:33:35.434015  481784 node_ready.go:49] node "embed-certs-419185" is "Ready"
	I1025 10:33:35.434044  481784 node_ready.go:38] duration metric: took 42.508900126s for node "embed-certs-419185" to be "Ready" ...
	I1025 10:33:35.434057  481784 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:33:35.434128  481784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:33:35.487400  481784 api_server.go:72] duration metric: took 43.54137548s to wait for apiserver process to appear ...
	I1025 10:33:35.487424  481784 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:33:35.487445  481784 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:33:35.519973  481784 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:33:35.527453  481784 api_server.go:141] control plane version: v1.34.1
	I1025 10:33:35.527492  481784 api_server.go:131] duration metric: took 40.056364ms to wait for apiserver health ...
	I1025 10:33:35.527502  481784 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:33:35.540942  481784 system_pods.go:59] 8 kube-system pods found
	I1025 10:33:35.540982  481784 system_pods.go:61] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:35.540990  481784 system_pods.go:61] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running
	I1025 10:33:35.540999  481784 system_pods.go:61] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:33:35.541003  481784 system_pods.go:61] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running
	I1025 10:33:35.541009  481784 system_pods.go:61] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running
	I1025 10:33:35.541012  481784 system_pods.go:61] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:33:35.541017  481784 system_pods.go:61] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running
	I1025 10:33:35.541027  481784 system_pods.go:61] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:35.541033  481784 system_pods.go:74] duration metric: took 13.525635ms to wait for pod list to return data ...
	I1025 10:33:35.541042  481784 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:33:35.549475  481784 default_sa.go:45] found service account: "default"
	I1025 10:33:35.549502  481784 default_sa.go:55] duration metric: took 8.45316ms for default service account to be created ...
	I1025 10:33:35.549518  481784 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:33:35.658437  481784 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:35.658535  481784 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:35.658557  481784 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running
	I1025 10:33:35.658601  481784 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:33:35.658627  481784 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running
	I1025 10:33:35.658650  481784 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running
	I1025 10:33:35.658688  481784 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:33:35.658721  481784 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running
	I1025 10:33:35.658767  481784 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:35.658812  481784 retry.go:31] will retry after 310.656982ms: missing components: kube-dns
	I1025 10:33:35.974581  481784 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:35.974684  481784 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:35.974721  481784 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running
	I1025 10:33:35.974772  481784 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:33:35.974809  481784 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running
	I1025 10:33:35.974851  481784 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running
	I1025 10:33:35.974877  481784 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:33:35.974932  481784 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running
	I1025 10:33:35.974965  481784 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:35.975029  481784 retry.go:31] will retry after 288.578725ms: missing components: kube-dns
	I1025 10:33:36.268153  481784 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:36.268237  481784 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:36.268258  481784 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running
	I1025 10:33:36.268283  481784 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:33:36.268314  481784 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running
	I1025 10:33:36.268338  481784 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running
	I1025 10:33:36.268359  481784 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:33:36.268394  481784 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running
	I1025 10:33:36.268420  481784 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:33:36.268450  481784 retry.go:31] will retry after 483.369474ms: missing components: kube-dns
	I1025 10:33:36.756508  481784 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:36.756584  481784 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Running
	I1025 10:33:36.756605  481784 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running
	I1025 10:33:36.756625  481784 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:33:36.756660  481784 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running
	I1025 10:33:36.756687  481784 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running
	I1025 10:33:36.756710  481784 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:33:36.756744  481784 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running
	I1025 10:33:36.756768  481784 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Running
	I1025 10:33:36.756792  481784 system_pods.go:126] duration metric: took 1.207266695s to wait for k8s-apps to be running ...
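
The three "will retry after ...ms: missing components: kube-dns" lines above come from a poll loop that re-lists the kube-system pods with a short randomized backoff until nothing required is still Pending. A schematic version of that loop; waitRunning and the interval bounds are assumptions, not minikube's retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitRunning polls check until it returns nil or the deadline passes,
// sleeping a jittered interval between attempts -- the shape of the
// "will retry after ..." loop in the log.
func waitRunning(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := 200*time.Millisecond + time.Duration(rand.Intn(400))*time.Millisecond
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
	}
}

func main() {
	attempts := 0
	err := waitRunning(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println(err)
}
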
	I1025 10:33:36.756828  481784 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:33:36.756924  481784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:33:36.775487  481784 system_svc.go:56] duration metric: took 18.648932ms WaitForService to wait for kubelet
	I1025 10:33:36.775517  481784 kubeadm.go:586] duration metric: took 44.829499454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:33:36.775539  481784 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:33:36.781171  481784 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:33:36.781204  481784 node_conditions.go:123] node cpu capacity is 2
	I1025 10:33:36.781218  481784 node_conditions.go:105] duration metric: took 5.673236ms to run NodePressure ...
	I1025 10:33:36.781229  481784 start.go:241] waiting for startup goroutines ...
	I1025 10:33:36.781237  481784 start.go:246] waiting for cluster config update ...
	I1025 10:33:36.781249  481784 start.go:255] writing updated cluster config ...
	I1025 10:33:36.781533  481784 ssh_runner.go:195] Run: rm -f paused
	I1025 10:33:36.785429  481784 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:33:36.789162  481784 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.796800  481784 pod_ready.go:94] pod "coredns-66bc5c9577-q85rh" is "Ready"
	I1025 10:33:36.796828  481784 pod_ready.go:86] duration metric: took 7.639022ms for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.804672  481784 pod_ready.go:83] waiting for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.816360  481784 pod_ready.go:94] pod "etcd-embed-certs-419185" is "Ready"
	I1025 10:33:36.816386  481784 pod_ready.go:86] duration metric: took 11.687909ms for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.822081  481784 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.831392  481784 pod_ready.go:94] pod "kube-apiserver-embed-certs-419185" is "Ready"
	I1025 10:33:36.831421  481784 pod_ready.go:86] duration metric: took 9.312314ms for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:36.835864  481784 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:37.190380  481784 pod_ready.go:94] pod "kube-controller-manager-embed-certs-419185" is "Ready"
	I1025 10:33:37.190417  481784 pod_ready.go:86] duration metric: took 354.525982ms for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:37.389872  481784 pod_ready.go:83] waiting for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:37.789628  481784 pod_ready.go:94] pod "kube-proxy-2vqfc" is "Ready"
	I1025 10:33:37.789657  481784 pod_ready.go:86] duration metric: took 399.75746ms for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:37.990117  481784 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:38.390052  481784 pod_ready.go:94] pod "kube-scheduler-embed-certs-419185" is "Ready"
	I1025 10:33:38.390081  481784 pod_ready.go:86] duration metric: took 399.9358ms for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:33:38.390097  481784 pod_ready.go:40] duration metric: took 1.604632123s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
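
The "extra waiting" phase above checks each control-plane pod by label until its Ready condition holds (or the pod is gone). The same per-label check can be expressed with kubectl directly; this is an illustrative equivalent, not what the test itself runs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// One-shot Ready check for one of the labels listed in the log.
	out, err := exec.Command("kubectl", "wait", "--namespace", "kube-system",
		"--for=condition=Ready", "pod", "-l", "k8s-app=kube-dns",
		"--timeout=240s").CombinedOutput()
	fmt.Println(string(out), err)
}
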
	I1025 10:33:38.510746  481784 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:33:38.515902  481784 out.go:179] * Done! kubectl is now configured to use "embed-certs-419185" cluster and "default" namespace by default
	I1025 10:33:41.038051  485483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.454051304s)
	I1025 10:33:41.038122  485483 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.433779148s)
	I1025 10:33:41.038157  485483 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-204074" to be "Ready" ...
	I1025 10:33:41.038480  485483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.40255478s)
	I1025 10:33:41.038733  485483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.969764708s)
	I1025 10:33:41.042108  485483 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-204074 addons enable metrics-server
	
	I1025 10:33:41.085791  485483 node_ready.go:49] node "default-k8s-diff-port-204074" is "Ready"
	I1025 10:33:41.085830  485483 node_ready.go:38] duration metric: took 47.64649ms for node "default-k8s-diff-port-204074" to be "Ready" ...
	I1025 10:33:41.085843  485483 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:33:41.086028  485483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:33:41.132499  485483 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1025 10:33:41.135835  485483 addons.go:514] duration metric: took 6.921390512s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1025 10:33:41.143857  485483 api_server.go:72] duration metric: took 6.929769193s to wait for apiserver process to appear ...
	I1025 10:33:41.143878  485483 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:33:41.143897  485483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:33:41.155255  485483 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:33:41.155302  485483 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:33:41.644962  485483 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1025 10:33:41.655803  485483 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1025 10:33:41.657186  485483 api_server.go:141] control plane version: v1.34.1
	I1025 10:33:41.657211  485483 api_server.go:131] duration metric: took 513.324925ms to wait for apiserver health ...
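
The 500 above is the apiserver's verbose healthz body: every registered check reports [+] or [-], and here only poststarthook/rbac/bootstrap-roles was still failing, so the probe half a second later gets a plain 200 "ok". A sketch of such a probe; TLS verification is skipped for brevity, whereas the real flow trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skip verification of the apiserver's cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8444/healthz?verbose")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode)
	fmt.Println(string(body)) // per-check [+]/[-] lines as in the log
}
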
	I1025 10:33:41.657221  485483 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:33:41.660989  485483 system_pods.go:59] 8 kube-system pods found
	I1025 10:33:41.661068  485483 system_pods.go:61] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:41.661096  485483 system_pods.go:61] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:33:41.661135  485483 system_pods.go:61] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:41.661164  485483 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:33:41.661189  485483 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:33:41.661212  485483 system_pods.go:61] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:41.661250  485483 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:33:41.661278  485483 system_pods.go:61] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Running
	I1025 10:33:41.661302  485483 system_pods.go:74] duration metric: took 4.074324ms to wait for pod list to return data ...
	I1025 10:33:41.661323  485483 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:33:41.664357  485483 default_sa.go:45] found service account: "default"
	I1025 10:33:41.664424  485483 default_sa.go:55] duration metric: took 3.065647ms for default service account to be created ...
	I1025 10:33:41.664447  485483 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:33:41.668109  485483 system_pods.go:86] 8 kube-system pods found
	I1025 10:33:41.668201  485483 system_pods.go:89] "coredns-66bc5c9577-hwczp" [76f57b6b-0326-45da-9fb3-89258d0b3cd7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:33:41.668227  485483 system_pods.go:89] "etcd-default-k8s-diff-port-204074" [f50bb999-aa6a-400e-9873-cf0638cd4600] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:33:41.668271  485483 system_pods.go:89] "kindnet-pt5xf" [58092077-ef5b-4a3a-ac91-d90b746fb830] Running
	I1025 10:33:41.668292  485483 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-204074" [aadf7462-83f7-49c2-952e-c8e4f2b1e744] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:33:41.668319  485483 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-204074" [16d8de03-a05e-43f7-a7f3-0c40609270b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:33:41.668371  485483 system_pods.go:89] "kube-proxy-qcgkj" [8c1000b6-fabf-48f5-add5-0ff29481b2cd] Running
	I1025 10:33:41.668392  485483 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-204074" [02e676ca-6d73-4ba5-aa35-01d99d296133] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:33:41.668413  485483 system_pods.go:89] "storage-provisioner" [7dcb2226-851a-46f1-8af1-0e796e81167c] Running
	I1025 10:33:41.668448  485483 system_pods.go:126] duration metric: took 3.98117ms to wait for k8s-apps to be running ...
	I1025 10:33:41.668470  485483 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:33:41.668555  485483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:33:41.683696  485483 system_svc.go:56] duration metric: took 15.214682ms WaitForService to wait for kubelet
	I1025 10:33:41.683767  485483 kubeadm.go:586] duration metric: took 7.469684171s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:33:41.683804  485483 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:33:41.687225  485483 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:33:41.687305  485483 node_conditions.go:123] node cpu capacity is 2
	I1025 10:33:41.687338  485483 node_conditions.go:105] duration metric: took 3.50452ms to run NodePressure ...
	I1025 10:33:41.687377  485483 start.go:241] waiting for startup goroutines ...
	I1025 10:33:41.687401  485483 start.go:246] waiting for cluster config update ...
	I1025 10:33:41.687426  485483 start.go:255] writing updated cluster config ...
	I1025 10:33:41.687737  485483 ssh_runner.go:195] Run: rm -f paused
	I1025 10:33:41.692337  485483 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:33:41.696539  485483 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hwczp" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:33:43.703013  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:33:46.203083  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 10:33:35 embed-certs-419185 crio[835]: time="2025-10-25T10:33:35.481523545Z" level=info msg="Created container 271b4529c50e9af5d1f965f1201c34bb0725c6a3a0c3997405cc5005008a0e00: kube-system/coredns-66bc5c9577-q85rh/coredns" id=55605aef-253e-4543-82a3-5bb76a7b9490 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:35 embed-certs-419185 crio[835]: time="2025-10-25T10:33:35.483560756Z" level=info msg="Starting container: 271b4529c50e9af5d1f965f1201c34bb0725c6a3a0c3997405cc5005008a0e00" id=4acfe9f1-b228-48eb-940a-f9471d086798 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:33:35 embed-certs-419185 crio[835]: time="2025-10-25T10:33:35.494193432Z" level=info msg="Started container" PID=1726 containerID=271b4529c50e9af5d1f965f1201c34bb0725c6a3a0c3997405cc5005008a0e00 description=kube-system/coredns-66bc5c9577-q85rh/coredns id=4acfe9f1-b228-48eb-940a-f9471d086798 name=/runtime.v1.RuntimeService/StartContainer sandboxID=787ced2898a11546034c54acdc322f3b743c649dac9986a928d162d35cd18b0e
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.095990236Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d5b3a4a7-4a31-4bef-b03c-73803baff637 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.096079985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.121786835Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf UID:8c21ab0b-2754-4861-96bc-2019ef1c2e7d NetNS:/var/run/netns/e3b384fc-df4e-41ea-87b3-d5aee3ba0b44 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c840}] Aliases:map[]}"
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.121854577Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.133904641Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf UID:8c21ab0b-2754-4861-96bc-2019ef1c2e7d NetNS:/var/run/netns/e3b384fc-df4e-41ea-87b3-d5aee3ba0b44 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012c840}] Aliases:map[]}"
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.134059826Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.142878612Z" level=info msg="Ran pod sandbox 31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf with infra container: default/busybox/POD" id=d5b3a4a7-4a31-4bef-b03c-73803baff637 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.144015824Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5b6f263-08b4-4a49-a442-4b6fa31586d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.144169418Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e5b6f263-08b4-4a49-a442-4b6fa31586d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.144208893Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e5b6f263-08b4-4a49-a442-4b6fa31586d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.145205419Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e274a75-1fbb-4606-95f9-693ef033ddac name=/runtime.v1.ImageService/PullImage
	Oct 25 10:33:39 embed-certs-419185 crio[835]: time="2025-10-25T10:33:39.148130042Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.22949426Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9e274a75-1fbb-4606-95f9-693ef033ddac name=/runtime.v1.ImageService/PullImage
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.23050992Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e384060-a011-4c05-80f8-8c316a1e8864 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.23353203Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b7665d9b-8397-4a3c-a9ad-3dede138d38d name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.240625461Z" level=info msg="Creating container: default/busybox/busybox" id=d98ffd41-1ab7-4034-8617-8248f2b6bddd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.240952382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.249190549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.249794479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.271012542Z" level=info msg="Created container 56ec322dbf10ece419afdc1617cb323471a56d13fed7fc1c84d7f1b5e56189b9: default/busybox/busybox" id=d98ffd41-1ab7-4034-8617-8248f2b6bddd name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.275282241Z" level=info msg="Starting container: 56ec322dbf10ece419afdc1617cb323471a56d13fed7fc1c84d7f1b5e56189b9" id=883a8133-2c29-4d05-91ad-e647b7f97f07 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:33:41 embed-certs-419185 crio[835]: time="2025-10-25T10:33:41.280509088Z" level=info msg="Started container" PID=1789 containerID=56ec322dbf10ece419afdc1617cb323471a56d13fed7fc1c84d7f1b5e56189b9 description=default/busybox/busybox id=883a8133-2c29-4d05-91ad-e647b7f97f07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	56ec322dbf10e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   31ae64c654f54       busybox                                      default
	271b4529c50e9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   787ced2898a11       coredns-66bc5c9577-q85rh                     kube-system
	f16d0908342b2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   b492feab34d3b       storage-provisioner                          kube-system
	b7ff766b6cbb5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      55 seconds ago       Running             kube-proxy                0                   31c61d0e2c16f       kube-proxy-2vqfc                             kube-system
	50b3e635abebc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   7c744af3eb03e       kindnet-4ncnd                                kube-system
	d57557b84249f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   2bb0fdf79debe       kube-controller-manager-embed-certs-419185   kube-system
	f0e70e33fce36       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   328121fee3283       kube-apiserver-embed-certs-419185            kube-system
	c0c9c82050730       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   254fdc21cd065       kube-scheduler-embed-certs-419185            kube-system
	8fb5244db8aeb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   4d9691302e5db       etcd-embed-certs-419185                      kube-system
	
	
	==> coredns [271b4529c50e9af5d1f965f1201c34bb0725c6a3a0c3997405cc5005008a0e00] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48693 - 56164 "HINFO IN 1104603295112339526.112675333791706310. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.02475488s
	
	
	==> describe nodes <==
	Name:               embed-certs-419185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-419185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-419185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-419185
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:33:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:33:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:33:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:33:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:33:48 +0000   Sat, 25 Oct 2025 10:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-419185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ffdb98b4-012c-493a-a464-c37adcde7bd4
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-q85rh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-419185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         62s
	  kube-system                 kindnet-4ncnd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-419185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-embed-certs-419185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-proxy-2vqfc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-419185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 55s   kube-proxy       
	  Normal   Starting                 62s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s   kubelet          Node embed-certs-419185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s   kubelet          Node embed-certs-419185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s   kubelet          Node embed-certs-419185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s   node-controller  Node embed-certs-419185 event: Registered Node embed-certs-419185 in Controller
	  Normal   NodeReady                15s   kubelet          Node embed-certs-419185 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct25 10:11] overlayfs: idmapped layers are currently not supported
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8fb5244db8aeb8909c821f057431d2b9f13bc870be73f120d1ee0ecb6d1dfc90] <==
	{"level":"warn","ts":"2025-10-25T10:32:42.867962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.901766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.924321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.943686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.966159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.985194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:42.996909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.016955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.033727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.053284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.069517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.094116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.107250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.138440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.175258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.176087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.200946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.217409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.231206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.255960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.282921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.316559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.335849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.355690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:32:43.464727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38164","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:33:50 up  2:16,  0 user,  load average: 3.39, 3.61, 3.13
	Linux embed-certs-419185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [50b3e635abebc8cb9da1b50c85c4d066cfcbfb63a2bf02f34c0d5649da74e64b] <==
	I1025 10:32:54.476492       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:32:54.476741       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:32:54.476868       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:32:54.476889       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:32:54.476904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:32:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:32:54.680485       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:32:54.680591       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:32:54.680630       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:32:54.680970       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:33:24.680571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:33:24.680571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:33:24.681816       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:33:24.681826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:33:26.081763       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:33:26.081866       1 metrics.go:72] Registering metrics
	I1025 10:33:26.081935       1 controller.go:711] "Syncing nftables rules"
	I1025 10:33:34.684949       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:33:34.685057       1 main.go:301] handling current node
	I1025 10:33:44.680604       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:33:44.680703       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f0e70e33fce36c359e206398e0679cbbef6283173aa8a80590d68adb4bf20960] <==
	I1025 10:32:44.531371       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:32:44.533602       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:32:44.563030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:44.563092       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:32:44.573004       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:32:44.573390       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:32:44.576716       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:44.720162       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:32:45.129282       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:32:45.150435       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:32:45.150466       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:32:46.121075       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:32:46.187305       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:32:46.345562       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:32:46.355648       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:32:46.366858       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:32:46.368106       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:32:46.376024       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:32:47.227957       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:32:47.278064       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:32:47.293943       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:32:52.061609       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1025 10:32:52.510076       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:32:52.616030       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:32:52.670535       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [d57557b84249f988971c6fa98106f6ef87adba4e0c25eb1630371d558ff90936] <==
	I1025 10:32:51.359587       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:32:51.363231       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:32:51.364272       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-419185" podCIDRs=["10.244.0.0/24"]
	I1025 10:32:51.372479       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:32:51.384785       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:32:51.386145       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:32:51.389923       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:32:51.391084       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:32:51.391302       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:32:51.391731       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:32:51.392212       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:32:51.392396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:32:51.392524       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:32:51.392804       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:32:51.393004       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:32:51.393925       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:32:51.394482       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:32:51.398641       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:32:51.400974       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:32:51.403291       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:32:51.412235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:32:51.439973       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:32:51.439999       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:32:51.440007       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:33:36.358416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b7ff766b6cbb53294c394e345e42206e4319a00ae2ebaaf7d841fdfa9cf99769] <==
	I1025 10:32:54.467051       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:32:54.553025       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:32:54.653544       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:32:54.653645       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:32:54.653747       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:32:54.683776       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:32:54.683929       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:32:54.687713       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:32:54.688261       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:32:54.688448       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:32:54.690114       1 config.go:200] "Starting service config controller"
	I1025 10:32:54.690208       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:32:54.690262       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:32:54.690305       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:32:54.692211       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:32:54.692270       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:32:54.695917       1 config.go:309] "Starting node config controller"
	I1025 10:32:54.695981       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:32:54.696047       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:32:54.790827       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:32:54.790977       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:32:54.792959       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c0c9c82050730c84a65868cf7cc7eb6bd65ba19ee31e9b86de667fb7c3b66d4d] <==
	E1025 10:32:44.434208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:32:44.439505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:32:44.447449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:32:44.447627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:32:44.447750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:32:44.447909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:32:44.448013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:32:44.448123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:32:45.262775       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:32:45.312842       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:32:45.334484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:32:45.346290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:32:45.361873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:32:45.387659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:32:45.468240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:32:45.488241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:32:45.504602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:32:45.519246       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:32:45.540305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:32:45.605980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:32:45.654085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:32:45.730838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:32:45.744550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:32:45.745912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1025 10:32:47.997219       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:32:52 embed-certs-419185 kubelet[1306]: I1025 10:32:52.256296    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2hk\" (UniqueName: \"kubernetes.io/projected/9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7-kube-api-access-qs2hk\") pod \"kube-proxy-2vqfc\" (UID: \"9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7\") " pod="kube-system/kube-proxy-2vqfc"
	Oct 25 10:32:52 embed-certs-419185 kubelet[1306]: E1025 10:32:52.276581    1306 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:embed-certs-419185\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-419185' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 25 10:32:52 embed-certs-419185 kubelet[1306]: E1025 10:32:52.276655    1306 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-4ncnd\" is forbidden: User \"system:node:embed-certs-419185\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'embed-certs-419185' and this object" podUID="1b443cbc-f209-4f7f-af12-0461716bb2d0" pod="kube-system/kindnet-4ncnd"
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.358299    1306 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.358870    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7-kube-proxy podName:9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7 nodeName:}" failed. No retries permitted until 2025-10-25 10:32:53.858828585 +0000 UTC m=+6.789001884 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7-kube-proxy") pod "kube-proxy-2vqfc" (UID: "9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.511683    1306 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.511821    1306 projected.go:196] Error preparing data for projected volume kube-api-access-qs2hk for pod kube-system/kube-proxy-2vqfc: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.511934    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7-kube-api-access-qs2hk podName:9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7 nodeName:}" failed. No retries permitted until 2025-10-25 10:32:54.011903808 +0000 UTC m=+6.942077108 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qs2hk" (UniqueName: "kubernetes.io/projected/9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7-kube-api-access-qs2hk") pod "kube-proxy-2vqfc" (UID: "9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.526264    1306 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.526320    1306 projected.go:196] Error preparing data for projected volume kube-api-access-vngll for pod kube-system/kindnet-4ncnd: failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:53 embed-certs-419185 kubelet[1306]: E1025 10:32:53.526409    1306 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b443cbc-f209-4f7f-af12-0461716bb2d0-kube-api-access-vngll podName:1b443cbc-f209-4f7f-af12-0461716bb2d0 nodeName:}" failed. No retries permitted until 2025-10-25 10:32:54.026386082 +0000 UTC m=+6.956559382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vngll" (UniqueName: "kubernetes.io/projected/1b443cbc-f209-4f7f-af12-0461716bb2d0-kube-api-access-vngll") pod "kindnet-4ncnd" (UID: "1b443cbc-f209-4f7f-af12-0461716bb2d0") : failed to sync configmap cache: timed out waiting for the condition
	Oct 25 10:32:54 embed-certs-419185 kubelet[1306]: I1025 10:32:54.087777    1306 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:32:54 embed-certs-419185 kubelet[1306]: W1025 10:32:54.361577    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-31c61d0e2c16fdad43fa87a0691091ce62b6bf5202ab93222b859965a0ba27fa WatchSource:0}: Error finding container 31c61d0e2c16fdad43fa87a0691091ce62b6bf5202ab93222b859965a0ba27fa: Status 404 returned error can't find the container with id 31c61d0e2c16fdad43fa87a0691091ce62b6bf5202ab93222b859965a0ba27fa
	Oct 25 10:32:55 embed-certs-419185 kubelet[1306]: I1025 10:32:55.369581    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4ncnd" podStartSLOduration=3.369558631 podStartE2EDuration="3.369558631s" podCreationTimestamp="2025-10-25 10:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:32:55.349167769 +0000 UTC m=+8.279341085" watchObservedRunningTime="2025-10-25 10:32:55.369558631 +0000 UTC m=+8.299731931"
	Oct 25 10:32:55 embed-certs-419185 kubelet[1306]: I1025 10:32:55.369934    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vqfc" podStartSLOduration=3.369926726 podStartE2EDuration="3.369926726s" podCreationTimestamp="2025-10-25 10:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:32:55.368836808 +0000 UTC m=+8.299010116" watchObservedRunningTime="2025-10-25 10:32:55.369926726 +0000 UTC m=+8.300100042"
	Oct 25 10:33:34 embed-certs-419185 kubelet[1306]: I1025 10:33:34.922747    1306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:33:35 embed-certs-419185 kubelet[1306]: I1025 10:33:35.095115    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/662f0cd5-ae79-463a-8a7a-f84ef27d6fee-tmp\") pod \"storage-provisioner\" (UID: \"662f0cd5-ae79-463a-8a7a-f84ef27d6fee\") " pod="kube-system/storage-provisioner"
	Oct 25 10:33:35 embed-certs-419185 kubelet[1306]: I1025 10:33:35.095220    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e4d97f26-45e9-46af-a009-111b0a00784f-config-volume\") pod \"coredns-66bc5c9577-q85rh\" (UID: \"e4d97f26-45e9-46af-a009-111b0a00784f\") " pod="kube-system/coredns-66bc5c9577-q85rh"
	Oct 25 10:33:35 embed-certs-419185 kubelet[1306]: I1025 10:33:35.095244    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nkgb\" (UniqueName: \"kubernetes.io/projected/e4d97f26-45e9-46af-a009-111b0a00784f-kube-api-access-8nkgb\") pod \"coredns-66bc5c9577-q85rh\" (UID: \"e4d97f26-45e9-46af-a009-111b0a00784f\") " pod="kube-system/coredns-66bc5c9577-q85rh"
	Oct 25 10:33:35 embed-certs-419185 kubelet[1306]: I1025 10:33:35.095268    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v48bc\" (UniqueName: \"kubernetes.io/projected/662f0cd5-ae79-463a-8a7a-f84ef27d6fee-kube-api-access-v48bc\") pod \"storage-provisioner\" (UID: \"662f0cd5-ae79-463a-8a7a-f84ef27d6fee\") " pod="kube-system/storage-provisioner"
	Oct 25 10:33:35 embed-certs-419185 kubelet[1306]: W1025 10:33:35.350675    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-787ced2898a11546034c54acdc322f3b743c649dac9986a928d162d35cd18b0e WatchSource:0}: Error finding container 787ced2898a11546034c54acdc322f3b743c649dac9986a928d162d35cd18b0e: Status 404 returned error can't find the container with id 787ced2898a11546034c54acdc322f3b743c649dac9986a928d162d35cd18b0e
	Oct 25 10:33:36 embed-certs-419185 kubelet[1306]: I1025 10:33:36.523954    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-q85rh" podStartSLOduration=44.523937222 podStartE2EDuration="44.523937222s" podCreationTimestamp="2025-10-25 10:32:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:33:36.523585604 +0000 UTC m=+49.453758920" watchObservedRunningTime="2025-10-25 10:33:36.523937222 +0000 UTC m=+49.454110539"
	Oct 25 10:33:38 embed-certs-419185 kubelet[1306]: I1025 10:33:38.785330    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=45.785309859 podStartE2EDuration="45.785309859s" podCreationTimestamp="2025-10-25 10:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:33:36.60269836 +0000 UTC m=+49.532871668" watchObservedRunningTime="2025-10-25 10:33:38.785309859 +0000 UTC m=+51.715483167"
	Oct 25 10:33:38 embed-certs-419185 kubelet[1306]: I1025 10:33:38.928685    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stmf\" (UniqueName: \"kubernetes.io/projected/8c21ab0b-2754-4861-96bc-2019ef1c2e7d-kube-api-access-6stmf\") pod \"busybox\" (UID: \"8c21ab0b-2754-4861-96bc-2019ef1c2e7d\") " pod="default/busybox"
	Oct 25 10:33:39 embed-certs-419185 kubelet[1306]: W1025 10:33:39.140033    1306 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf WatchSource:0}: Error finding container 31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf: Status 404 returned error can't find the container with id 31ae64c654f5468d2af5665ca8e56f997df68cc69c4f7edf73bc38e874fa7dcf
	
	
	==> storage-provisioner [f16d0908342b22c703bbc73be1cdf661532d9a92eaaefba739d52cf58e2e0126] <==
	I1025 10:33:35.442425       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:33:35.492008       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:33:35.492073       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:33:35.604580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:35.658412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:33:35.658661       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:33:35.659250       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_433c709a-d30a-4b02-aa75-eb47e6118e73!
	I1025 10:33:35.659090       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24dcc85a-2e1b-4115-b38c-8d923951b052", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-419185_433c709a-d30a-4b02-aa75-eb47e6118e73 became leader
	W1025 10:33:35.687235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:35.693060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:33:35.760175       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_433c709a-d30a-4b02-aa75-eb47e6118e73!
	W1025 10:33:37.705781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:37.712990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:39.721590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:39.725817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:41.729384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:41.737116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:43.743367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:43.749013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:45.752568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:45.757230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:47.759911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:47.764390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:49.767803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:33:49.779016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-419185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-204074 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-204074 --alsologtostderr -v=1: exit status 80 (2.102165718s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-204074 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:34:28.777697  490614 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:34:28.777867  490614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:28.777874  490614 out.go:374] Setting ErrFile to fd 2...
	I1025 10:34:28.777878  490614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:28.778154  490614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:34:28.778429  490614 out.go:368] Setting JSON to false
	I1025 10:34:28.778453  490614 mustload.go:65] Loading cluster: default-k8s-diff-port-204074
	I1025 10:34:28.778839  490614 config.go:182] Loaded profile config "default-k8s-diff-port-204074": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:28.779342  490614 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-204074 --format={{.State.Status}}
	I1025 10:34:28.798905  490614 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:34:28.799285  490614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:28.900321  490614 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 10:34:28.888375706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:28.901389  490614 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-204074 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:34:28.904829  490614 out.go:179] * Pausing node default-k8s-diff-port-204074 ... 
	I1025 10:34:28.908678  490614 host.go:66] Checking if "default-k8s-diff-port-204074" exists ...
	I1025 10:34:28.909024  490614 ssh_runner.go:195] Run: systemctl --version
	I1025 10:34:28.909078  490614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-204074
	I1025 10:34:28.929114  490614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33442 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/default-k8s-diff-port-204074/id_rsa Username:docker}
	I1025 10:34:29.055296  490614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:34:29.074399  490614 pause.go:52] kubelet running: true
	I1025 10:34:29.074470  490614 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:34:29.434520  490614 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:34:29.434609  490614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:34:29.525798  490614 cri.go:89] found id: "db21d161dfeef87ab0f7be598156f8aef8912dc979c9d322d68c986b0d00d2c6"
	I1025 10:34:29.525872  490614 cri.go:89] found id: "493310f9ab129a6e1d6281430845b4e12fbe0244899d8780b7e4d8dca312849b"
	I1025 10:34:29.525891  490614 cri.go:89] found id: "f79c465a2b069953f8a630b74e3dc39ad7ac142a9b4f29869e4868e73798c34b"
	I1025 10:34:29.525915  490614 cri.go:89] found id: "86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189"
	I1025 10:34:29.525955  490614 cri.go:89] found id: "b1bd8af17762678ba6f7830c709ab99400853ea3f02ac350b0be0e566844077c"
	I1025 10:34:29.525979  490614 cri.go:89] found id: "4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed"
	I1025 10:34:29.526000  490614 cri.go:89] found id: "802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e"
	I1025 10:34:29.526040  490614 cri.go:89] found id: "357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037"
	I1025 10:34:29.526063  490614 cri.go:89] found id: "cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310"
	I1025 10:34:29.526088  490614 cri.go:89] found id: "0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	I1025 10:34:29.526124  490614 cri.go:89] found id: "44c36eb0d3af1c70f460f7ff95b889938f58449578c41fdc1a6f5a428c39018d"
	I1025 10:34:29.526148  490614 cri.go:89] found id: ""
	I1025 10:34:29.526230  490614 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:34:29.548170  490614 retry.go:31] will retry after 147.868988ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:34:29Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:34:29.696596  490614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:34:29.711029  490614 pause.go:52] kubelet running: false
	I1025 10:34:29.711108  490614 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:34:29.943424  490614 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:34:29.943525  490614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:34:30.137298  490614 cri.go:89] found id: "db21d161dfeef87ab0f7be598156f8aef8912dc979c9d322d68c986b0d00d2c6"
	I1025 10:34:30.137373  490614 cri.go:89] found id: "493310f9ab129a6e1d6281430845b4e12fbe0244899d8780b7e4d8dca312849b"
	I1025 10:34:30.137395  490614 cri.go:89] found id: "f79c465a2b069953f8a630b74e3dc39ad7ac142a9b4f29869e4868e73798c34b"
	I1025 10:34:30.137421  490614 cri.go:89] found id: "86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189"
	I1025 10:34:30.137458  490614 cri.go:89] found id: "b1bd8af17762678ba6f7830c709ab99400853ea3f02ac350b0be0e566844077c"
	I1025 10:34:30.137488  490614 cri.go:89] found id: "4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed"
	I1025 10:34:30.137513  490614 cri.go:89] found id: "802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e"
	I1025 10:34:30.137536  490614 cri.go:89] found id: "357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037"
	I1025 10:34:30.137570  490614 cri.go:89] found id: "cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310"
	I1025 10:34:30.137597  490614 cri.go:89] found id: "0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	I1025 10:34:30.137615  490614 cri.go:89] found id: "44c36eb0d3af1c70f460f7ff95b889938f58449578c41fdc1a6f5a428c39018d"
	I1025 10:34:30.137653  490614 cri.go:89] found id: ""
	I1025 10:34:30.137773  490614 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:34:30.162182  490614 retry.go:31] will retry after 219.719206ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:34:30Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:34:30.383411  490614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:34:30.399967  490614 pause.go:52] kubelet running: false
	I1025 10:34:30.400034  490614 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:34:30.657328  490614 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:34:30.657416  490614 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:34:30.768448  490614 cri.go:89] found id: "db21d161dfeef87ab0f7be598156f8aef8912dc979c9d322d68c986b0d00d2c6"
	I1025 10:34:30.768509  490614 cri.go:89] found id: "493310f9ab129a6e1d6281430845b4e12fbe0244899d8780b7e4d8dca312849b"
	I1025 10:34:30.768538  490614 cri.go:89] found id: "f79c465a2b069953f8a630b74e3dc39ad7ac142a9b4f29869e4868e73798c34b"
	I1025 10:34:30.768560  490614 cri.go:89] found id: "86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189"
	I1025 10:34:30.768594  490614 cri.go:89] found id: "b1bd8af17762678ba6f7830c709ab99400853ea3f02ac350b0be0e566844077c"
	I1025 10:34:30.768618  490614 cri.go:89] found id: "4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed"
	I1025 10:34:30.768638  490614 cri.go:89] found id: "802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e"
	I1025 10:34:30.768673  490614 cri.go:89] found id: "357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037"
	I1025 10:34:30.768694  490614 cri.go:89] found id: "cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310"
	I1025 10:34:30.768716  490614 cri.go:89] found id: "0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	I1025 10:34:30.768737  490614 cri.go:89] found id: "44c36eb0d3af1c70f460f7ff95b889938f58449578c41fdc1a6f5a428c39018d"
	I1025 10:34:30.768771  490614 cri.go:89] found id: ""
	I1025 10:34:30.768851  490614 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:34:30.786444  490614 out.go:203] 
	W1025 10:34:30.790183  490614 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:34:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:34:30.790267  490614 out.go:285] * 
	W1025 10:34:30.797745  490614 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:34:30.800998  490614 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-204074 --alsologtostderr -v=1 failed: exit status 80
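The three identical attempts above share one proximate cause: each "sudo runc list -f json" exits 1 because /run/runc does not exist inside the node, so the pause path can never enumerate containers even though crictl listed them successfully moments earlier, and minikube gives up with GUEST_PAUSE (exit status 80). Below is a minimal Go sketch of that retry-then-fail shape; the listRunning helper and the backoff values are illustrative assumptions, not minikube's actual retry.go API.

	// Minimal sketch of the retry-then-fail pattern logged above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRunning mirrors the command the log shows failing with
	// "open /run/runc: no such file or directory".
	func listRunning() error {
		return exec.Command("sudo", "runc", "list", "-f", "json").Run()
	}

	func main() {
		backoff := 150 * time.Millisecond
		var err error
		for attempt := 1; attempt <= 3; attempt++ {
			if err = listRunning(); err == nil {
				return // containers listed; pause can proceed
			}
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff += backoff / 2 // roughly matches the growing delays above
		}
		fmt.Println("Exiting due to GUEST_PAUSE:", err)
	}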
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-204074
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-204074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	        "Created": "2025-10-25T10:31:40.749344043Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485613,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:33:26.781658202Z",
	            "FinishedAt": "2025-10-25T10:33:25.93631735Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hostname",
	        "HostsPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hosts",
	        "LogPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a-json.log",
	        "Name": "/default-k8s-diff-port-204074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-204074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-204074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	                "LowerDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-204074",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-204074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-204074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57e3829456cf5e3a9fee38866f42b16ee866689a1529df04fb657c25fb826087",
	            "SandboxKey": "/var/run/docker/netns/57e3829456cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-204074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:78:66:b3:5d:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8d6d82e4f1c3e18dd593c28bd34ec865e52f7ca53dce62df012fba5b98ee7a9",
	                    "EndpointID": "6aad9dbd23caa265551ed329573b22679945a6df1dc8f7435be97969324b4e8e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-204074",
	                        "114adef2e3f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
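The NetworkSettings.Ports block above is exactly what the cli_runner template earlier in the log reads to resolve the forwarded SSH port (22/tcp is published on 127.0.0.1:33442). A minimal Go sketch of that lookup, shelling out the same way the harness does; the template string is copied from the log, the surrounding program is illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// same Go template logged by cli_runner.go for the port lookup
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-204074").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out)) // prints 33442 given the inspect output above
	}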
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074: exit status 2 (519.688303ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
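An exit status of 2 while the Host field still prints Running is consistent with a half-paused cluster: the status command signals component state through its exit code while printing whatever state it can read, which is why the harness tolerates it here. A hedged Go sketch of reading both pieces; the interpretation of the non-zero code is an assumption drawn from the harness's "may be ok" note, not a documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-204074")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// non-zero can still mean "host up, some components stopped"
			fmt.Printf("host=%s exit=%d\n", out, ee.ExitCode())
			return
		}
		fmt.Printf("host=%s exit=0\n", out)
	}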
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25: (1.838331416s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:34:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:34:04.047712  488429 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:34:04.047855  488429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:04.047867  488429 out.go:374] Setting ErrFile to fd 2...
	I1025 10:34:04.047886  488429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:04.048179  488429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:34:04.048587  488429 out.go:368] Setting JSON to false
	I1025 10:34:04.049585  488429 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8194,"bootTime":1761380250,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:34:04.049652  488429 start.go:141] virtualization:  
	I1025 10:34:04.052688  488429 out.go:179] * [embed-certs-419185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:34:04.056601  488429 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:34:04.056655  488429 notify.go:220] Checking for updates...
	I1025 10:34:04.062419  488429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:34:04.065292  488429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:04.068299  488429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:34:04.071447  488429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:34:04.074328  488429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:34:04.077589  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:04.078189  488429 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:34:04.108883  488429 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:34:04.109009  488429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:04.163951  488429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:04.154845421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:04.164075  488429 docker.go:318] overlay module found
	I1025 10:34:04.167281  488429 out.go:179] * Using the docker driver based on existing profile
	I1025 10:34:04.170109  488429 start.go:305] selected driver: docker
	I1025 10:34:04.170127  488429 start.go:925] validating driver "docker" against &{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:04.170225  488429 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:34:04.171067  488429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:04.232270  488429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:04.223394737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:04.232658  488429 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:04.232689  488429 cni.go:84] Creating CNI manager for ""
	I1025 10:34:04.232753  488429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:04.232794  488429 start.go:349] cluster config:
	{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:04.235860  488429 out.go:179] * Starting "embed-certs-419185" primary control-plane node in "embed-certs-419185" cluster
	I1025 10:34:04.238753  488429 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:34:04.241614  488429 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:34:04.244492  488429 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:34:04.244622  488429 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:04.244655  488429 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:34:04.244668  488429 cache.go:58] Caching tarball of preloaded images
	I1025 10:34:04.244738  488429 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:34:04.244753  488429 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:34:04.244859  488429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:34:04.265772  488429 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:34:04.265798  488429 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:34:04.265812  488429 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:34:04.265835  488429 start.go:360] acquireMachinesLock for embed-certs-419185: {Name:mk5a130bf45ea43a164134eaf1f0ed9a364dff5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:04.265900  488429 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "embed-certs-419185"
	I1025 10:34:04.265934  488429 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:34:04.265943  488429 fix.go:54] fixHost starting: 
	I1025 10:34:04.266196  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:04.283235  488429 fix.go:112] recreateIfNeeded on embed-certs-419185: state=Stopped err=<nil>
	W1025 10:34:04.283275  488429 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:34:02.703583  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:04.707014  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:04.286572  488429 out.go:252] * Restarting existing docker container for "embed-certs-419185" ...
	I1025 10:34:04.286662  488429 cli_runner.go:164] Run: docker start embed-certs-419185
	I1025 10:34:04.552453  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:04.572001  488429 kic.go:430] container "embed-certs-419185" state is running.
	I1025 10:34:04.572411  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:04.596934  488429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:34:04.597175  488429 machine.go:93] provisionDockerMachine start ...
	I1025 10:34:04.597236  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:04.620779  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:04.621102  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:04.621112  488429 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:34:04.623726  488429 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43066->127.0.0.1:33447: read: connection reset by peer
	I1025 10:34:07.774855  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:34:07.774885  488429 ubuntu.go:182] provisioning hostname "embed-certs-419185"
	I1025 10:34:07.774954  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:07.791944  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:07.792254  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:07.792271  488429 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-419185 && echo "embed-certs-419185" | sudo tee /etc/hostname
	I1025 10:34:07.954192  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:34:07.954312  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:07.971767  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:07.972103  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:07.972127  488429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-419185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-419185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-419185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:34:08.123514  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:34:08.123580  488429 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:34:08.123634  488429 ubuntu.go:190] setting up certificates
	I1025 10:34:08.123663  488429 provision.go:84] configureAuth start
	I1025 10:34:08.123742  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:08.140327  488429 provision.go:143] copyHostCerts
	I1025 10:34:08.140403  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:34:08.140426  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:34:08.140520  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:34:08.140641  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:34:08.140652  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:34:08.140689  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:34:08.140757  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:34:08.140766  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:34:08.140790  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:34:08.140849  488429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.embed-certs-419185 san=[127.0.0.1 192.168.76.2 embed-certs-419185 localhost minikube]
	I1025 10:34:08.561140  488429 provision.go:177] copyRemoteCerts
	I1025 10:34:08.561215  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:34:08.561261  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:08.578886  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:08.683009  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:34:08.704520  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:34:08.724093  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:34:08.741789  488429 provision.go:87] duration metric: took 618.096172ms to configureAuth
	I1025 10:34:08.741858  488429 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:34:08.742057  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:08.742163  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:08.764004  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:08.764342  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:08.764364  488429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:34:09.101488  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:34:09.101508  488429 machine.go:96] duration metric: took 4.504323332s to provisionDockerMachine
	I1025 10:34:09.101519  488429 start.go:293] postStartSetup for "embed-certs-419185" (driver="docker")
	I1025 10:34:09.101530  488429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:34:09.101602  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:34:09.101645  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.123024  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.227577  488429 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:34:09.231126  488429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:34:09.231178  488429 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:34:09.231192  488429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:34:09.231244  488429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:34:09.231332  488429 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:34:09.231444  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:34:09.239290  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:09.257811  488429 start.go:296] duration metric: took 156.276876ms for postStartSetup
	I1025 10:34:09.257904  488429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:34:09.257956  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.275779  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.376359  488429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
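	The two df probes above are the free-disk check: the first prints the percentage of /var in use (column 5 of `df -h`), the second the free space in whole gigabytes (column 4 of `df -BG`). Run standalone they look like this (sample outputs, not from this run):
	    df -h /var | awk 'NR==2{print $5}'    # e.g. 23%
	    df -BG /var | awk 'NR==2{print $4}'   # e.g. 154G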
	I1025 10:34:09.381628  488429 fix.go:56] duration metric: took 5.115676511s for fixHost
	I1025 10:34:09.381656  488429 start.go:83] releasing machines lock for "embed-certs-419185", held for 5.115738797s
	I1025 10:34:09.381748  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:09.400392  488429 ssh_runner.go:195] Run: cat /version.json
	I1025 10:34:09.400445  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.400461  488429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:34:09.400515  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.420725  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.437148  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.634831  488429 ssh_runner.go:195] Run: systemctl --version
	I1025 10:34:09.641448  488429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:34:09.680722  488429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:34:09.685185  488429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:34:09.685254  488429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:34:09.693834  488429 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:34:09.693859  488429 start.go:495] detecting cgroup driver to use...
	I1025 10:34:09.693891  488429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:34:09.693939  488429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:34:09.709943  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:34:09.723300  488429 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:34:09.723365  488429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:34:09.739348  488429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:34:09.752673  488429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:34:09.887991  488429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:34:10.063391  488429 docker.go:234] disabling docker service ...
	I1025 10:34:10.063459  488429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:34:10.081621  488429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:34:10.096682  488429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:34:10.231667  488429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:34:10.367123  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:34:10.381253  488429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:34:10.399740  488429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:34:10.399807  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.410325  488429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:34:10.410392  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.420737  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.430394  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.440193  488429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:34:10.448982  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.458278  488429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.466914  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
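	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys. This is a reconstruction from the commands, and the TOML table headers are assumed from stock CRI-O config layout rather than shown in the log:
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]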
	I1025 10:34:10.476009  488429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:34:10.483602  488429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:34:10.490997  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:10.619977  488429 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:34:10.751837  488429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:34:10.751902  488429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:34:10.755866  488429 start.go:563] Will wait 60s for crictl version
	I1025 10:34:10.755931  488429 ssh_runner.go:195] Run: which crictl
	I1025 10:34:10.759741  488429 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:34:10.789385  488429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
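	crictl resolves its endpoint from the /etc/crictl.yaml written a few steps earlier, so the same version probe can be reproduced by hand, with or without the explicit flag:
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version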
	I1025 10:34:10.789495  488429 ssh_runner.go:195] Run: crio --version
	I1025 10:34:10.818378  488429 ssh_runner.go:195] Run: crio --version
	I1025 10:34:10.849601  488429 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:34:07.203323  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:09.203710  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:11.203824  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:10.852478  488429 cli_runner.go:164] Run: docker network inspect embed-certs-419185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:10.873870  488429 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:34:10.877625  488429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:34:10.887511  488429 kubeadm.go:883] updating cluster {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:34:10.887636  488429 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:10.887702  488429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:10.921277  488429 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:34:10.921302  488429 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:34:10.921358  488429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:10.956561  488429 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:34:10.956589  488429 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:34:10.956597  488429 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:34:10.956696  488429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-419185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
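	The empty `ExecStart=` line in the unit dump above is deliberate systemd practice: a drop-in cannot add a second ExecStart to a plain service, so the inherited value must be cleared before the replacement is assigned. The drop-in scp'd below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf follows this shape (flags trimmed to a few of those logged above):
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2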
	I1025 10:34:10.956783  488429 ssh_runner.go:195] Run: crio config
	I1025 10:34:11.037921  488429 cni.go:84] Creating CNI manager for ""
	I1025 10:34:11.037949  488429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:11.037975  488429 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:34:11.037999  488429 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-419185 NodeName:embed-certs-419185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:34:11.038137  488429 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-419185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:34:11.038220  488429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:34:11.046902  488429 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:34:11.046976  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:34:11.054877  488429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:34:11.068615  488429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:34:11.082345  488429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
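	With the config staged at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before kubeadm consumes it; `kubeadm config validate` exists in recent kubeadm releases, so this sketch assumes the bundled v1.34.1 binary carries it:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new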
	I1025 10:34:11.096566  488429 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:34:11.100575  488429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:34:11.115200  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:11.238879  488429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:34:11.256295  488429 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185 for IP: 192.168.76.2
	I1025 10:34:11.256314  488429 certs.go:195] generating shared ca certs ...
	I1025 10:34:11.256330  488429 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:11.256475  488429 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:34:11.256524  488429 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:34:11.256536  488429 certs.go:257] generating profile certs ...
	I1025 10:34:11.256623  488429 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key
	I1025 10:34:11.256687  488429 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe
	I1025 10:34:11.256738  488429 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key
	I1025 10:34:11.256846  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:34:11.256884  488429 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:34:11.256900  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:34:11.256928  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:34:11.256958  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:34:11.256990  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:34:11.257040  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:11.257662  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:34:11.278358  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:34:11.296610  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:34:11.326873  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:34:11.355114  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:34:11.378410  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:34:11.412439  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:34:11.444134  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:34:11.464629  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:34:11.486908  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:34:11.514337  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:34:11.534318  488429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:34:11.552059  488429 ssh_runner.go:195] Run: openssl version
	I1025 10:34:11.561095  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:34:11.570646  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.574521  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.574586  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.626096  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:34:11.634709  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:34:11.643399  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.647326  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.647410  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.702202  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:34:11.714977  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:34:11.728693  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.732801  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.732871  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.781021  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
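	The hash-then-symlink sequence above reproduces OpenSSL's c_rehash convention: trust stores look a CA up through a symlink named <subject-hash>.0, which is exactly the b5213941.0 link created earlier. The same steps for one cert:
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here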
	I1025 10:34:11.790842  488429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:34:11.795326  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:34:11.839731  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:34:11.882576  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:34:11.934006  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:34:11.989758  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:34:12.038531  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
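	Each `-checkend 86400` probe above exits 0 if the certificate is still valid 24 hours from now and non-zero otherwise, so expiry can be branched on directly:
	    if openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400; then
	        echo "cert valid for at least 24h"
	    else
	        echo "cert expires within 24h - regenerate"
	    fi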
	I1025 10:34:12.097773  488429 kubeadm.go:400] StartCluster: {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:12.097934  488429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:34:12.098034  488429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:34:12.144956  488429 cri.go:89] found id: "5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1"
	I1025 10:34:12.145020  488429 cri.go:89] found id: "e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33"
	I1025 10:34:12.145039  488429 cri.go:89] found id: ""
	I1025 10:34:12.145137  488429 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:34:12.159686  488429 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:34:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:34:12.159860  488429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:34:12.183632  488429 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:34:12.183695  488429 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:34:12.183784  488429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:34:12.207959  488429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:34:12.208639  488429 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-419185" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:12.208951  488429 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-419185" cluster setting kubeconfig missing "embed-certs-419185" context setting]
	I1025 10:34:12.209439  488429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
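	The repair performed here amounts to re-adding the cluster, user and context entries to the kubeconfig. A hand-rolled equivalent would be the following; the server address is taken from the node IP logged above, and the client cert/key paths follow the usual minikube profile layout (client.key is logged further down, client.crt is inferred):
	    kubectl config set-cluster embed-certs-419185 \
	      --server=https://192.168.76.2:8443 \
	      --certificate-authority=/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt
	    kubectl config set-credentials embed-certs-419185 \
	      --client-certificate=/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.crt \
	      --client-key=/home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key
	    kubectl config set-context embed-certs-419185 --cluster=embed-certs-419185 --user=embed-certs-419185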
	I1025 10:34:12.211180  488429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:34:12.232572  488429 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:34:12.232649  488429 kubeadm.go:601] duration metric: took 48.933333ms to restartPrimaryControlPlane
	I1025 10:34:12.232674  488429 kubeadm.go:402] duration metric: took 134.912586ms to StartCluster
	I1025 10:34:12.232716  488429 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:12.232793  488429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:12.234096  488429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:12.234385  488429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:34:12.234881  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:12.234836  488429 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:34:12.235026  488429 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-419185"
	I1025 10:34:12.235053  488429 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-419185"
	I1025 10:34:12.235086  488429 addons.go:69] Setting default-storageclass=true in profile "embed-certs-419185"
	I1025 10:34:12.235127  488429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-419185"
	W1025 10:34:12.235093  488429 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:34:12.235264  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.235574  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.235785  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.235055  488429 addons.go:69] Setting dashboard=true in profile "embed-certs-419185"
	I1025 10:34:12.236131  488429 addons.go:238] Setting addon dashboard=true in "embed-certs-419185"
	W1025 10:34:12.236140  488429 addons.go:247] addon dashboard should already be in state true
	I1025 10:34:12.236171  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.236578  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.247203  488429 out.go:179] * Verifying Kubernetes components...
	I1025 10:34:12.250628  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:12.285395  488429 addons.go:238] Setting addon default-storageclass=true in "embed-certs-419185"
	W1025 10:34:12.285422  488429 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:34:12.285447  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.289209  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.296302  488429 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:12.300339  488429 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:34:12.300365  488429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:34:12.300450  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.315225  488429 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:34:12.319961  488429 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:34:12.322963  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:34:12.322992  488429 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:34:12.323067  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.348544  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.350545  488429 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:34:12.350564  488429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:34:12.350626  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.387704  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.403297  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.586037  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:34:12.648706  488429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:34:12.712978  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:34:12.721654  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:34:12.721679  488429 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:34:12.801231  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:34:12.801312  488429 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:34:12.906177  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:34:12.906202  488429 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:34:12.932006  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:34:12.932034  488429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:34:12.953676  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:34:12.953704  488429 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:34:12.978594  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:34:12.978622  488429 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:34:13.001840  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:34:13.001872  488429 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:34:13.025803  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:34:13.025842  488429 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:34:13.042696  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:34:13.042726  488429 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:34:13.062114  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
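	All ten dashboard manifests go through one kubectl invocation with repeated -f flags. Since they sit in a single directory, a directory apply would be the shorter equivalent, with the caveat that it would also pick up the storage-provisioner and storageclass manifests installed into the same path above (presumably why minikube enumerates the files explicitly):
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/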
	W1025 10:34:13.704217  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:15.203751  485483 pod_ready.go:94] pod "coredns-66bc5c9577-hwczp" is "Ready"
	I1025 10:34:15.203838  485483 pod_ready.go:86] duration metric: took 33.507226684s for pod "coredns-66bc5c9577-hwczp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.210682  485483 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.216305  485483 pod_ready.go:94] pod "etcd-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.216379  485483 pod_ready.go:86] duration metric: took 5.621001ms for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.220897  485483 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.226242  485483 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.226318  485483 pod_ready.go:86] duration metric: took 5.3482ms for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.229253  485483 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.400249  485483 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.400326  485483 pod_ready.go:86] duration metric: took 171.010844ms for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.600771  485483 pod_ready.go:83] waiting for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.000603  485483 pod_ready.go:94] pod "kube-proxy-qcgkj" is "Ready"
	I1025 10:34:16.000685  485483 pod_ready.go:86] duration metric: took 399.833798ms for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.200009  485483 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.600184  485483 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:16.600262  485483 pod_ready.go:86] duration metric: took 400.173139ms for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.600291  485483 pod_ready.go:40] duration metric: took 34.90788081s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:16.704132  485483 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:34:16.707499  485483 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-204074" cluster and "default" namespace by default
	I1025 10:34:19.595027  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.008901879s)
	I1025 10:34:19.595089  488429 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.946312072s)
	I1025 10:34:19.595120  488429 node_ready.go:35] waiting up to 6m0s for node "embed-certs-419185" to be "Ready" ...
	I1025 10:34:19.595493  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.882492148s)
	I1025 10:34:19.643535  488429 node_ready.go:49] node "embed-certs-419185" is "Ready"
	I1025 10:34:19.643571  488429 node_ready.go:38] duration metric: took 48.432858ms for node "embed-certs-419185" to be "Ready" ...
	I1025 10:34:19.643584  488429 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:34:19.643655  488429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:34:19.704585  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.64242637s)
	I1025 10:34:19.704755  488429 api_server.go:72] duration metric: took 7.470312236s to wait for apiserver process to appear ...
	I1025 10:34:19.704768  488429 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:34:19.704785  488429 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:34:19.709374  488429 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-419185 addons enable metrics-server
	
	I1025 10:34:19.712883  488429 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:34:19.716914  488429 addons.go:514] duration metric: took 7.482079988s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:34:19.726700  488429 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:34:19.726724  488429 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
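	The component-by-component [+]/[-] breakdown above is the apiserver's verbose healthz output. It can be fetched directly; -k skips TLS verification, tolerable only for a local probe, and depending on the apiserver's anonymous-auth setting the request may need client credentials:
	    curl -k "https://192.168.76.2:8443/healthz?verbose"
	A 500 with "[-]poststarthook/rbac/bootstrap-roles failed" during startup typically clears once the bootstrap RBAC objects are created; the retry at 10:34:20 below comes back 200.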
	I1025 10:34:20.205286  488429 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:34:20.214729  488429 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:34:20.215932  488429 api_server.go:141] control plane version: v1.34.1
	I1025 10:34:20.215963  488429 api_server.go:131] duration metric: took 511.188742ms to wait for apiserver health ...
	I1025 10:34:20.215974  488429 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:34:20.219625  488429 system_pods.go:59] 8 kube-system pods found
	I1025 10:34:20.219666  488429 system_pods.go:61] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:34:20.219676  488429 system_pods.go:61] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:34:20.219682  488429 system_pods.go:61] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:34:20.219689  488429 system_pods.go:61] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:34:20.219695  488429 system_pods.go:61] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:34:20.219701  488429 system_pods.go:61] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:34:20.219707  488429 system_pods.go:61] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:34:20.219712  488429 system_pods.go:61] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Running
	I1025 10:34:20.219718  488429 system_pods.go:74] duration metric: took 3.737834ms to wait for pod list to return data ...
	I1025 10:34:20.219732  488429 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:34:20.222766  488429 default_sa.go:45] found service account: "default"
	I1025 10:34:20.222792  488429 default_sa.go:55] duration metric: took 3.053655ms for default service account to be created ...
	I1025 10:34:20.222802  488429 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:34:20.226385  488429 system_pods.go:86] 8 kube-system pods found
	I1025 10:34:20.226421  488429 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:34:20.226436  488429 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:34:20.226442  488429 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:34:20.226457  488429 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:34:20.226465  488429 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:34:20.226474  488429 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:34:20.226481  488429 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:34:20.226491  488429 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Running
	I1025 10:34:20.226498  488429 system_pods.go:126] duration metric: took 3.690901ms to wait for k8s-apps to be running ...
	I1025 10:34:20.226511  488429 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:34:20.226571  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:34:20.241899  488429 system_svc.go:56] duration metric: took 15.378152ms WaitForService to wait for kubelet
	I1025 10:34:20.241930  488429 kubeadm.go:586] duration metric: took 8.00749383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:20.241950  488429 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:34:20.244977  488429 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:34:20.245010  488429 node_conditions.go:123] node cpu capacity is 2
	I1025 10:34:20.245023  488429 node_conditions.go:105] duration metric: took 3.06826ms to run NodePressure ...
	I1025 10:34:20.245035  488429 start.go:241] waiting for startup goroutines ...
	I1025 10:34:20.245042  488429 start.go:246] waiting for cluster config update ...
	I1025 10:34:20.245053  488429 start.go:255] writing updated cluster config ...
	I1025 10:34:20.245349  488429 ssh_runner.go:195] Run: rm -f paused
	I1025 10:34:20.249463  488429 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:20.256066  488429 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:34:22.262335  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:24.266945  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:26.768782  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
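	The pod_ready polling above (both for this cluster and for the default-k8s-diff-port coredns pod interleaved through this log) is roughly equivalent to a kubectl wait per label selector, with the extra twist that minikube also counts a deleted pod as success ("Ready" or be gone):
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s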
	
	
	==> CRI-O <==
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.394243164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.410015834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.410526521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.432647982Z" level=info msg="Created container 0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper" id=45e51b3d-f8e3-401f-8ea1-1add4eca70c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.43690928Z" level=info msg="Starting container: 0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674" id=cfe5878a-1b52-44a4-af27-efa5f9d8419f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.444247322Z" level=info msg="Started container" PID=1668 containerID=0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper id=cfe5878a-1b52-44a4-af27-efa5f9d8419f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865
	Oct 25 10:34:19 default-k8s-diff-port-204074 conmon[1666]: conmon 0203f322602e7be00d05 <ninfo>: container 1668 exited with status 1
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.683566272Z" level=info msg="Removing container: fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.698916009Z" level=info msg="Error loading conmon cgroup of container fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55: cgroup deleted" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.713562145Z" level=info msg="Removed container fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.705712981Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.709886441Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.710096898Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.710131746Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713662086Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713698468Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713724561Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716829975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716859448Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716882874Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721595896Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721831666Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721942158Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.727344357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.727376226Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0203f322602e7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   0fa887aaa0c6a       dashboard-metrics-scraper-6ffb444bf9-d7tbs             kubernetes-dashboard
	db21d161dfeef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   369b5a02ee88e       storage-provisioner                                    kube-system
	44c36eb0d3af1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   c012f18fe79f1       kubernetes-dashboard-855c9754f9-cf6hc                  kubernetes-dashboard
	493310f9ab129       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   26361de514979       coredns-66bc5c9577-hwczp                               kube-system
	4ca36797ef20a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   1eaaa24814c47       busybox                                                default
	f79c465a2b069       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   a464bbcd0629a       kindnet-pt5xf                                          kube-system
	86929effcce55       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   369b5a02ee88e       storage-provisioner                                    kube-system
	b1bd8af177626       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   7f2eb00e51d70       kube-proxy-qcgkj                                       kube-system
	4ecd5c6991209       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   9e6a16f4b104b       kube-scheduler-default-k8s-diff-port-204074            kube-system
	802d4fb83a2b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   254d2a5b24857       etcd-default-k8s-diff-port-204074                      kube-system
	357c1c33e5336       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   8ef4cbc2ebaa5       kube-apiserver-default-k8s-diff-port-204074            kube-system
	cf19925569a9e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   6b3482d453c86       kube-controller-manager-default-k8s-diff-port-204074   kube-system
	
	
	==> coredns [493310f9ab129a6e1d6281430845b4e12fbe0244899d8780b7e4d8dca312849b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47191 - 22502 "HINFO IN 8345633951952800569.1020371666462564740. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021608657s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-204074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-204074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-204074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-204074
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:34:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-204074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fedca12f-f823-4d61-b723-4e847b2985b6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-hwczp                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m14s
	  kube-system                 etcd-default-k8s-diff-port-204074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-pt5xf                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m14s
	  kube-system                 kube-apiserver-default-k8s-diff-port-204074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-204074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-qcgkj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-scheduler-default-k8s-diff-port-204074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d7tbs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cf6hc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m12s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m18s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m18s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m17s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m17s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m15s                  node-controller  Node default-k8s-diff-port-204074 event: Registered Node default-k8s-diff-port-204074 in Controller
	  Normal   NodeReady                92s                    kubelet          Node default-k8s-diff-port-204074 status is now: NodeReady
	  Normal   Starting                 59s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node default-k8s-diff-port-204074 event: Registered Node default-k8s-diff-port-204074 in Controller
	
	
	==> dmesg <==
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e] <==
	{"level":"warn","ts":"2025-10-25T10:33:36.421574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.439202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.465929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.486175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.547444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.613212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.683671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.695243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.736647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.791555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.841432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.896165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.920374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.973895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.012417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.038830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.064507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.109098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.127544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.158515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.192534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.226519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.263025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.397236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:33:39.372536Z","caller":"traceutil/trace.go:172","msg":"trace[873151183] transaction","detail":"{read_only:false; number_of_response:0; response_revision:488; }","duration":"112.531893ms","start":"2025-10-25T10:33:39.259988Z","end":"2025-10-25T10:33:39.372520Z","steps":["trace[873151183] 'process raft request'  (duration: 112.411473ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:34:32 up  2:17,  0 user,  load average: 3.68, 3.63, 3.16
	Linux default-k8s-diff-port-204074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f79c465a2b069953f8a630b74e3dc39ad7ac142a9b4f29869e4868e73798c34b] <==
	I1025 10:33:40.481774       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:33:40.482503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:33:40.482636       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:33:40.482649       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:33:40.482663       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:33:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:33:40.705528       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:33:40.705545       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:33:40.705553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:33:40.705817       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:34:10.705555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:34:10.705722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:34:10.705802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:34:10.707061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:34:12.105854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:34:12.105902       1 metrics.go:72] Registering metrics
	I1025 10:34:12.105998       1 controller.go:711] "Syncing nftables rules"
	I1025 10:34:20.704694       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:34:20.704863       1 main.go:301] handling current node
	I1025 10:34:30.711089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:34:30.711123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037] <==
	I1025 10:33:39.258068       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:33:39.258102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:33:39.258207       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:33:39.258253       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:33:39.258327       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:33:39.258405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:33:39.262142       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:33:39.262633       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:33:39.262654       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:33:39.262661       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:33:39.262667       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:33:39.280749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:33:39.339915       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:33:39.399486       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:33:39.515687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:33:39.658670       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:33:40.199101       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:33:40.381216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:33:40.463134       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:33:40.499237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:33:40.864628       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.5.98"}
	I1025 10:33:40.973847       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.25.171"}
	I1025 10:33:43.316271       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:33:43.666177       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:33:43.797506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310] <==
	I1025 10:33:43.212506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:33:43.219649       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:33:43.221836       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:33:43.221925       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:33:43.224128       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:33:43.227455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:33:43.230115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:33:43.237442       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:33:43.240792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:33:43.244933       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:33:43.247208       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:33:43.254615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:33:43.254641       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:33:43.254648       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:33:43.257774       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:33:43.257946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:33:43.258305       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:33:43.258554       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:33:43.258608       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:33:43.259444       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:33:43.259681       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:33:43.259714       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:33:43.259730       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:33:43.266405       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:33:43.266481       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-proxy [b1bd8af17762678ba6f7830c709ab99400853ea3f02ac350b0be0e566844077c] <==
	I1025 10:33:41.223540       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:33:41.332900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:33:41.441861       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:33:41.442062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:33:41.442206       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:33:41.480903       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:33:41.481016       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:33:41.485487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:33:41.485971       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:33:41.486183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:33:41.487670       1 config.go:200] "Starting service config controller"
	I1025 10:33:41.488598       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:33:41.489812       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:33:41.489881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:33:41.489921       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:33:41.489960       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:33:41.494060       1 config.go:309] "Starting node config controller"
	I1025 10:33:41.494137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:33:41.494176       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:33:41.590409       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:33:41.590474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:33:41.590409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed] <==
	I1025 10:33:37.134929       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:33:41.499281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:33:41.499468       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:33:41.508016       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:33:41.508281       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:33:41.508332       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:33:41.508388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:33:41.511883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.512192       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.512255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:33:41.512265       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:33:41.608944       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:33:41.612307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.613093       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902458     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552g6\" (UniqueName: \"kubernetes.io/projected/63248964-f275-4a0a-af79-0a05bd9965bb-kube-api-access-552g6\") pod \"kubernetes-dashboard-855c9754f9-cf6hc\" (UID: \"63248964-f275-4a0a-af79-0a05bd9965bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc"
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902483     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wd9b\" (UniqueName: \"kubernetes.io/projected/9a6f006d-b817-47bd-9c92-a78a2188f301-kube-api-access-7wd9b\") pod \"dashboard-metrics-scraper-6ffb444bf9-d7tbs\" (UID: \"9a6f006d-b817-47bd-9c92-a78a2188f301\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs"
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902507     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/63248964-f275-4a0a-af79-0a05bd9965bb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cf6hc\" (UID: \"63248964-f275-4a0a-af79-0a05bd9965bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc"
	Oct 25 10:33:44 default-k8s-diff-port-204074 kubelet[775]: W1025 10:33:44.716587     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e WatchSource:0}: Error finding container c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e: Status 404 returned error can't find the container with id c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e
	Oct 25 10:33:44 default-k8s-diff-port-204074 kubelet[775]: W1025 10:33:44.734521     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865 WatchSource:0}: Error finding container 0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865: Status 404 returned error can't find the container with id 0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865
	Oct 25 10:33:45 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:45.073868     775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:33:53 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:53.439248     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc" podStartSLOduration=4.689306287 podStartE2EDuration="10.439227875s" podCreationTimestamp="2025-10-25 10:33:43 +0000 UTC" firstStartedPulling="2025-10-25 10:33:44.719969716 +0000 UTC m=+11.484197010" lastFinishedPulling="2025-10-25 10:33:50.469891295 +0000 UTC m=+17.234118598" observedRunningTime="2025-10-25 10:33:51.627006666 +0000 UTC m=+18.391233969" watchObservedRunningTime="2025-10-25 10:33:53.439227875 +0000 UTC m=+20.203455178"
	Oct 25 10:33:55 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:55.606201     775 scope.go:117] "RemoveContainer" containerID="8c2df29c4c7049269ca5a6916a4a4d67a6c6811911f3737d936da9da459d9e71"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:56.611029     775 scope.go:117] "RemoveContainer" containerID="8c2df29c4c7049269ca5a6916a4a4d67a6c6811911f3737d936da9da459d9e71"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:56.611677     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: E1025 10:33:56.611928     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:33:57 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:57.615454     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:33:57 default-k8s-diff-port-204074 kubelet[775]: E1025 10:33:57.615609     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:04 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:04.692209     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:04 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:04.693016     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:11 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:11.652112     775 scope.go:117] "RemoveContainer" containerID="86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.386143     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.675990     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.676299     775 scope.go:117] "RemoveContainer" containerID="0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:19.676451     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:24 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:24.692850     775 scope.go:117] "RemoveContainer" containerID="0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	Oct 25 10:34:24 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:24.693468     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [44c36eb0d3af1c70f460f7ff95b889938f58449578c41fdc1a6f5a428c39018d] <==
	2025/10/25 10:33:50 Using namespace: kubernetes-dashboard
	2025/10/25 10:33:50 Using in-cluster config to connect to apiserver
	2025/10/25 10:33:50 Using secret token for csrf signing
	2025/10/25 10:33:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:33:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:33:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:33:50 Generating JWE encryption key
	2025/10/25 10:33:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:33:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:33:51 Initializing JWE encryption key from synchronized object
	2025/10/25 10:33:51 Creating in-cluster Sidecar client
	2025/10/25 10:33:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:33:51 Serving insecurely on HTTP port: 9090
	2025/10/25 10:34:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:33:50 Starting overwatch
	
	
	==> storage-provisioner [86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189] <==
	I1025 10:33:40.978286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:34:10.990399       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db21d161dfeef87ab0f7be598156f8aef8912dc979c9d322d68c986b0d00d2c6] <==
	I1025 10:34:11.730299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:34:11.771824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:34:11.772267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:34:11.775527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:15.234567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:19.494824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:23.093568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:26.147476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.176598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.193509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:34:29.193660       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:34:29.193838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca!
	I1025 10:34:29.193885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b6162f7-ef21-4da6-838b-9cd22ec3453b", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca became leader
	W1025 10:34:29.210937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.233861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:34:29.294363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca!
	W1025 10:34:31.245520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:31.250838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
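For reference, the pod_ready poll recorded at the top of these logs can be approximated by hand with kubectl; a minimal sketch (the context name is this profile's, the kube-dns label comes from the pod_ready log line, and the 4m timeout mirrors the "extra waiting up to 4m0s" message):

	kubectl --context default-k8s-diff-port-204074 wait pod \
	  --namespace kube-system \
	  --selector k8s-app=kube-dns \
	  --for condition=Ready \
	  --timeout 4m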
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074: exit status 2 (478.729912ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
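The Go template passed to --format can select other fields of minikube's status output as well; a sketch, assuming the field names of the v1.37.0 status struct (Host, Kubelet, APIServer, Kubeconfig), not part of the harness:

	out/minikube-linux-arm64 status -p default-k8s-diff-port-204074 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'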
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
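The field selector above restricts the listing to pods whose phase is anything other than Running; the same check with the standard output columns (node placement, restart counts) is a sketch using only stock kubectl flags:

	kubectl --context default-k8s-diff-port-204074 get pods -A \
	  --field-selector status.phase!=Running -o wide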
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-204074
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-204074:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	        "Created": "2025-10-25T10:31:40.749344043Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 485613,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:33:26.781658202Z",
	            "FinishedAt": "2025-10-25T10:33:25.93631735Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hostname",
	        "HostsPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/hosts",
	        "LogPath": "/var/lib/docker/containers/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a-json.log",
	        "Name": "/default-k8s-diff-port-204074",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-204074:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-204074",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a",
	                "LowerDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e85a7252b6645d35aa69af6f246969581b36f8fafff02d78082a3757e2e49a13/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-204074",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-204074/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-204074",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-204074",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "57e3829456cf5e3a9fee38866f42b16ee866689a1529df04fb657c25fb826087",
	            "SandboxKey": "/var/run/docker/netns/57e3829456cf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-204074": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:78:66:b3:5d:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a8d6d82e4f1c3e18dd593c28bd34ec865e52f7ca53dce62df012fba5b98ee7a9",
	                    "EndpointID": "6aad9dbd23caa265551ed329573b22679945a6df1dc8f7435be97969324b4e8e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-204074",
	                        "114adef2e3f9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
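The inspect output above is how the harness reaches the node: sshd inside the kic container is published on an ephemeral host port (33442 here). As a minimal sketch, the same Go template that appears later in these logs can be run by hand to recover the mapped port (the profile name doubles as the container name):

	# prints the host port bound to the container's 22/tcp, e.g. 33442
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-204074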
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074: exit status 2 (529.679186ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-204074 logs -n 25: (1.392103115s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-506318 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ delete  │ -p cert-options-506318                                                                                                                                                                                                                        │ cert-options-506318          │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:28 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:28 UTC │ 25 Oct 25 10:29 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-610853 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │                     │
	│ stop    │ -p old-k8s-version-610853 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:30 UTC │
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
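The last audit row is the invocation under test: the pause command was issued at 10:34 UTC and never recorded an end time. A sketch of reproducing the failing step against the same profile, assuming the profile still exists:

	# re-run the failing pause step with verbose logging
	out/minikube-linux-arm64 pause -p default-k8s-diff-port-204074 --alsologtostderr -v=1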
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:34:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:34:04.047712  488429 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:34:04.047855  488429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:04.047867  488429 out.go:374] Setting ErrFile to fd 2...
	I1025 10:34:04.047886  488429 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:04.048179  488429 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:34:04.048587  488429 out.go:368] Setting JSON to false
	I1025 10:34:04.049585  488429 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8194,"bootTime":1761380250,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:34:04.049652  488429 start.go:141] virtualization:  
	I1025 10:34:04.052688  488429 out.go:179] * [embed-certs-419185] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:34:04.056601  488429 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:34:04.056655  488429 notify.go:220] Checking for updates...
	I1025 10:34:04.062419  488429 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:34:04.065292  488429 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:04.068299  488429 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:34:04.071447  488429 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:34:04.074328  488429 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:34:04.077589  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:04.078189  488429 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:34:04.108883  488429 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:34:04.109009  488429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:04.163951  488429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:04.154845421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:04.164075  488429 docker.go:318] overlay module found
	I1025 10:34:04.167281  488429 out.go:179] * Using the docker driver based on existing profile
	I1025 10:34:04.170109  488429 start.go:305] selected driver: docker
	I1025 10:34:04.170127  488429 start.go:925] validating driver "docker" against &{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:04.170225  488429 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:34:04.171067  488429 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:04.232270  488429 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:04.223394737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:04.232658  488429 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:04.232689  488429 cni.go:84] Creating CNI manager for ""
	I1025 10:34:04.232753  488429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:04.232794  488429 start.go:349] cluster config:
	{Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:04.235860  488429 out.go:179] * Starting "embed-certs-419185" primary control-plane node in "embed-certs-419185" cluster
	I1025 10:34:04.238753  488429 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:34:04.241614  488429 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:34:04.244492  488429 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:34:04.244622  488429 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:04.244655  488429 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:34:04.244668  488429 cache.go:58] Caching tarball of preloaded images
	I1025 10:34:04.244738  488429 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:34:04.244753  488429 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:34:04.244859  488429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:34:04.265772  488429 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:34:04.265798  488429 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:34:04.265812  488429 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:34:04.265835  488429 start.go:360] acquireMachinesLock for embed-certs-419185: {Name:mk5a130bf45ea43a164134eaf1f0ed9a364dff5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:04.265900  488429 start.go:364] duration metric: took 35.93µs to acquireMachinesLock for "embed-certs-419185"
	I1025 10:34:04.265934  488429 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:34:04.265943  488429 fix.go:54] fixHost starting: 
	I1025 10:34:04.266196  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:04.283235  488429 fix.go:112] recreateIfNeeded on embed-certs-419185: state=Stopped err=<nil>
	W1025 10:34:04.283275  488429 fix.go:138] unexpected machine state, will restart: <nil>
	W1025 10:34:02.703583  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:04.707014  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:04.286572  488429 out.go:252] * Restarting existing docker container for "embed-certs-419185" ...
	I1025 10:34:04.286662  488429 cli_runner.go:164] Run: docker start embed-certs-419185
	I1025 10:34:04.552453  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:04.572001  488429 kic.go:430] container "embed-certs-419185" state is running.
	I1025 10:34:04.572411  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:04.596934  488429 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/config.json ...
	I1025 10:34:04.597175  488429 machine.go:93] provisionDockerMachine start ...
	I1025 10:34:04.597236  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:04.620779  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:04.621102  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:04.621112  488429 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:34:04.623726  488429 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43066->127.0.0.1:33447: read: connection reset by peer
	I1025 10:34:07.774855  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:34:07.774885  488429 ubuntu.go:182] provisioning hostname "embed-certs-419185"
	I1025 10:34:07.774954  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:07.791944  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:07.792254  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:07.792271  488429 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-419185 && echo "embed-certs-419185" | sudo tee /etc/hostname
	I1025 10:34:07.954192  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-419185
	
	I1025 10:34:07.954312  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:07.971767  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:07.972103  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:07.972127  488429 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-419185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-419185/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-419185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:34:08.123514  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:34:08.123580  488429 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:34:08.123634  488429 ubuntu.go:190] setting up certificates
	I1025 10:34:08.123663  488429 provision.go:84] configureAuth start
	I1025 10:34:08.123742  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:08.140327  488429 provision.go:143] copyHostCerts
	I1025 10:34:08.140403  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:34:08.140426  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:34:08.140520  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:34:08.140641  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:34:08.140652  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:34:08.140689  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:34:08.140757  488429 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:34:08.140766  488429 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:34:08.140790  488429 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:34:08.140849  488429 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.embed-certs-419185 san=[127.0.0.1 192.168.76.2 embed-certs-419185 localhost minikube]
	I1025 10:34:08.561140  488429 provision.go:177] copyRemoteCerts
	I1025 10:34:08.561215  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:34:08.561261  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:08.578886  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:08.683009  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:34:08.704520  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:34:08.724093  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:34:08.741789  488429 provision.go:87] duration metric: took 618.096172ms to configureAuth
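configureAuth above regenerates the machine's server certificate with the SANs listed at provision.go:117. A hypothetical spot-check of the result on the host (the path comes from the log; the openssl call itself is not part of the harness):

	# list the SANs baked into the freshly generated server cert
	openssl x509 -in /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem -noout -ext subjectAltName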
	I1025 10:34:08.741858  488429 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:34:08.742057  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:08.742163  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:08.764004  488429 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:08.764342  488429 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33447 <nil> <nil>}
	I1025 10:34:08.764364  488429 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:34:09.101488  488429 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:34:09.101508  488429 machine.go:96] duration metric: took 4.504323332s to provisionDockerMachine
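The SSH command above writes a one-line environment drop-in for CRI-O and restarts the service; the echoed CRIO_MINIKUBE_OPTIONS line is the file's content coming back through tee. A hypothetical manual check (not run by the harness) that the drop-in landed and the runtime came back up:

	# verify the insecure-registry drop-in and the service state inside the node
	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- 'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'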
	I1025 10:34:09.101519  488429 start.go:293] postStartSetup for "embed-certs-419185" (driver="docker")
	I1025 10:34:09.101530  488429 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:34:09.101602  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:34:09.101645  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.123024  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.227577  488429 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:34:09.231126  488429 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:34:09.231178  488429 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:34:09.231192  488429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:34:09.231244  488429 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:34:09.231332  488429 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:34:09.231444  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:34:09.239290  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:09.257811  488429 start.go:296] duration metric: took 156.276876ms for postStartSetup
	I1025 10:34:09.257904  488429 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:34:09.257956  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.275779  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.376359  488429 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:34:09.381628  488429 fix.go:56] duration metric: took 5.115676511s for fixHost
	I1025 10:34:09.381656  488429 start.go:83] releasing machines lock for "embed-certs-419185", held for 5.115738797s
	I1025 10:34:09.381748  488429 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-419185
	I1025 10:34:09.400392  488429 ssh_runner.go:195] Run: cat /version.json
	I1025 10:34:09.400445  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.400461  488429 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:34:09.400515  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:09.420725  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.437148  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:09.634831  488429 ssh_runner.go:195] Run: systemctl --version
	I1025 10:34:09.641448  488429 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:34:09.680722  488429 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:34:09.685185  488429 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:34:09.685254  488429 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:34:09.693834  488429 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:34:09.693859  488429 start.go:495] detecting cgroup driver to use...
	I1025 10:34:09.693891  488429 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:34:09.693939  488429 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:34:09.709943  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:34:09.723300  488429 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:34:09.723365  488429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:34:09.739348  488429 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:34:09.752673  488429 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:34:09.887991  488429 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:34:10.063391  488429 docker.go:234] disabling docker service ...
	I1025 10:34:10.063459  488429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:34:10.081621  488429 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:34:10.096682  488429 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:34:10.231667  488429 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:34:10.367123  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:34:10.381253  488429 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:34:10.399740  488429 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:34:10.399807  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.410325  488429 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:34:10.410392  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.420737  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.430394  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.440193  488429 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:34:10.448982  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.458278  488429 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.466914  488429 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:10.476009  488429 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:34:10.483602  488429 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:34:10.490997  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:10.619977  488429 ssh_runner.go:195] Run: sudo systemctl restart crio
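The sed series above edits /etc/crio/crio.conf.d/02-crio.conf in place before this restart. Assembling those edits, the touched keys should end up roughly as follows (a sketch reconstructed from the commands, not a dump of the actual file):

	# /etc/crio/crio.conf.d/02-crio.conf -- expected effect of the edits above
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]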
	I1025 10:34:10.751837  488429 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:34:10.751902  488429 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:34:10.755866  488429 start.go:563] Will wait 60s for crictl version
	I1025 10:34:10.755931  488429 ssh_runner.go:195] Run: which crictl
	I1025 10:34:10.759741  488429 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:34:10.789385  488429 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:34:10.789495  488429 ssh_runner.go:195] Run: crio --version
	I1025 10:34:10.818378  488429 ssh_runner.go:195] Run: crio --version
	I1025 10:34:10.849601  488429 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:34:07.203323  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:09.203710  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	W1025 10:34:11.203824  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:10.852478  488429 cli_runner.go:164] Run: docker network inspect embed-certs-419185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:10.873870  488429 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:34:10.877625  488429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:34:10.887511  488429 kubeadm.go:883] updating cluster {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:34:10.887636  488429 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:10.887702  488429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:10.921277  488429 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:34:10.921302  488429 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:34:10.921358  488429 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:10.956561  488429 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:34:10.956589  488429 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:34:10.956597  488429 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:34:10.956696  488429 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-419185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:34:10.956783  488429 ssh_runner.go:195] Run: crio config
	I1025 10:34:11.037921  488429 cni.go:84] Creating CNI manager for ""
	I1025 10:34:11.037949  488429 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:11.037975  488429 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:34:11.037999  488429 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-419185 NodeName:embed-certs-419185 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:34:11.038137  488429 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-419185"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
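The generated kubeadm.yaml above is a single file carrying four YAML documents separated by --- markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch that lists the kind of each document (the path is the one scp'd a few lines below; the flat string split is an illustrative shortcut, not how kubeadm itself decodes the file):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Documents in the file are separated by "---" on its own line.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(line, "kind:") {
    				fmt.Printf("document %d: %s\n", i, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
    			}
    		}
    	}
    }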
	
	I1025 10:34:11.038220  488429 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:34:11.046902  488429 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:34:11.046976  488429 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:34:11.054877  488429 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1025 10:34:11.068615  488429 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:34:11.082345  488429 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1025 10:34:11.096566  488429 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:34:11.100575  488429 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:34:11.115200  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:11.238879  488429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:34:11.256295  488429 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185 for IP: 192.168.76.2
	I1025 10:34:11.256314  488429 certs.go:195] generating shared ca certs ...
	I1025 10:34:11.256330  488429 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:11.256475  488429 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:34:11.256524  488429 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:34:11.256536  488429 certs.go:257] generating profile certs ...
	I1025 10:34:11.256623  488429 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/client.key
	I1025 10:34:11.256687  488429 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key.627d90fe
	I1025 10:34:11.256738  488429 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key
	I1025 10:34:11.256846  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:34:11.256884  488429 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:34:11.256900  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:34:11.256928  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:34:11.256958  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:34:11.256990  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:34:11.257040  488429 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:11.257662  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:34:11.278358  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:34:11.296610  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:34:11.326873  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:34:11.355114  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1025 10:34:11.378410  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:34:11.412439  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:34:11.444134  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/embed-certs-419185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:34:11.464629  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:34:11.486908  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:34:11.514337  488429 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:34:11.534318  488429 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:34:11.552059  488429 ssh_runner.go:195] Run: openssl version
	I1025 10:34:11.561095  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:34:11.570646  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.574521  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.574586  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:34:11.626096  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:34:11.634709  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:34:11.643399  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.647326  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.647410  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:34:11.702202  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:34:11.714977  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:34:11.728693  488429 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.732801  488429 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.732871  488429 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:34:11.781021  488429 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
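Each of the three CA certificates above is installed the way update-ca-certificates would do it: the certificate's OpenSSL subject hash (e.g. 3ec20f2e, b5213941, 51391683) names a .0 symlink under /etc/ssl/certs so OpenSSL-linked clients can find it. A hedged sketch of that step, shelling out to openssl exactly as the log does (hashLink is a hypothetical helper; assumes openssl on PATH and root privileges):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // hashLink symlinks certPath to /etc/ssl/certs/<subject-hash>.0,
    // matching the openssl/ln pair in the log above.
    func hashLink(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := hashLink("/usr/share/ca-certificates/294017.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }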
	I1025 10:34:11.790842  488429 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:34:11.795326  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:34:11.839731  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:34:11.882576  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:34:11.934006  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:34:11.989758  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:34:12.038531  488429 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
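The six openssl -checkend 86400 runs above verify that no control-plane certificate expires within the next 24 hours before the existing cluster is reused. The equivalent check in stdlib Go (a sketch; the path and window are illustrative, taken from the log):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath
    // expires inside the given window (openssl x509 -checkend equivalent).
    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon)
    }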
	I1025 10:34:12.097773  488429 kubeadm.go:400] StartCluster: {Name:embed-certs-419185 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-419185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:12.097934  488429 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:34:12.098034  488429 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:34:12.144956  488429 cri.go:89] found id: "5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1"
	I1025 10:34:12.145020  488429 cri.go:89] found id: "e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33"
	I1025 10:34:12.145039  488429 cri.go:89] found id: ""
	I1025 10:34:12.145137  488429 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:34:12.159686  488429 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:34:12Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:34:12.159860  488429 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:34:12.183632  488429 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:34:12.183695  488429 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:34:12.183784  488429 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:34:12.207959  488429 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:34:12.208639  488429 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-419185" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:12.208951  488429 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-419185" cluster setting kubeconfig missing "embed-certs-419185" context setting]
	I1025 10:34:12.209439  488429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:12.211180  488429 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:34:12.232572  488429 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:34:12.232649  488429 kubeadm.go:601] duration metric: took 48.933333ms to restartPrimaryControlPlane
	I1025 10:34:12.232674  488429 kubeadm.go:402] duration metric: took 134.912586ms to StartCluster
	I1025 10:34:12.232716  488429 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:12.232793  488429 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:12.234096  488429 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:12.234385  488429 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:34:12.234881  488429 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:12.234836  488429 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:34:12.235026  488429 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-419185"
	I1025 10:34:12.235053  488429 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-419185"
	I1025 10:34:12.235086  488429 addons.go:69] Setting default-storageclass=true in profile "embed-certs-419185"
	I1025 10:34:12.235127  488429 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-419185"
	W1025 10:34:12.235093  488429 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:34:12.235264  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.235574  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.235785  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.235055  488429 addons.go:69] Setting dashboard=true in profile "embed-certs-419185"
	I1025 10:34:12.236131  488429 addons.go:238] Setting addon dashboard=true in "embed-certs-419185"
	W1025 10:34:12.236140  488429 addons.go:247] addon dashboard should already be in state true
	I1025 10:34:12.236171  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.236578  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.247203  488429 out.go:179] * Verifying Kubernetes components...
	I1025 10:34:12.250628  488429 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:12.285395  488429 addons.go:238] Setting addon default-storageclass=true in "embed-certs-419185"
	W1025 10:34:12.285422  488429 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:34:12.285447  488429 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:34:12.289209  488429 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:34:12.296302  488429 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:12.300339  488429 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:34:12.300365  488429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:34:12.300450  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.315225  488429 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:34:12.319961  488429 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:34:12.322963  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:34:12.322992  488429 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:34:12.323067  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.348544  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.350545  488429 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:34:12.350564  488429 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:34:12.350626  488429 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:34:12.387704  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.403297  488429 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:34:12.586037  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:34:12.648706  488429 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:34:12.712978  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:34:12.721654  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:34:12.721679  488429 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:34:12.801231  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:34:12.801312  488429 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:34:12.906177  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:34:12.906202  488429 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:34:12.932006  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:34:12.932034  488429 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:34:12.953676  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:34:12.953704  488429 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:34:12.978594  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:34:12.978622  488429 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:34:13.001840  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:34:13.001872  488429 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:34:13.025803  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:34:13.025842  488429 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:34:13.042696  488429 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:34:13.042726  488429 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:34:13.062114  488429 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
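All ten dashboard manifests go to the API server in a single kubectl apply, so the addon costs one binary invocation and the objects are created in the order listed. An os/exec sketch of that invocation (file list abbreviated; KUBECONFIG pinned the way the log shows):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	args := []string{"apply"}
    	for _, f := range []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ...remaining dashboard manifests as listed in the log above
    	} {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }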
	W1025 10:34:13.704217  485483 pod_ready.go:104] pod "coredns-66bc5c9577-hwczp" is not "Ready", error: <nil>
	I1025 10:34:15.203751  485483 pod_ready.go:94] pod "coredns-66bc5c9577-hwczp" is "Ready"
	I1025 10:34:15.203838  485483 pod_ready.go:86] duration metric: took 33.507226684s for pod "coredns-66bc5c9577-hwczp" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.210682  485483 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.216305  485483 pod_ready.go:94] pod "etcd-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.216379  485483 pod_ready.go:86] duration metric: took 5.621001ms for pod "etcd-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.220897  485483 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.226242  485483 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.226318  485483 pod_ready.go:86] duration metric: took 5.3482ms for pod "kube-apiserver-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.229253  485483 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.400249  485483 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:15.400326  485483 pod_ready.go:86] duration metric: took 171.010844ms for pod "kube-controller-manager-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:15.600771  485483 pod_ready.go:83] waiting for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.000603  485483 pod_ready.go:94] pod "kube-proxy-qcgkj" is "Ready"
	I1025 10:34:16.000685  485483 pod_ready.go:86] duration metric: took 399.833798ms for pod "kube-proxy-qcgkj" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.200009  485483 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.600184  485483 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-204074" is "Ready"
	I1025 10:34:16.600262  485483 pod_ready.go:86] duration metric: took 400.173139ms for pod "kube-scheduler-default-k8s-diff-port-204074" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:16.600291  485483 pod_ready.go:40] duration metric: took 34.90788081s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:16.704132  485483 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:34:16.707499  485483 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-204074" cluster and "default" namespace by default
	I1025 10:34:19.595027  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.008901879s)
	I1025 10:34:19.595089  488429 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.946312072s)
	I1025 10:34:19.595120  488429 node_ready.go:35] waiting up to 6m0s for node "embed-certs-419185" to be "Ready" ...
	I1025 10:34:19.595493  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.882492148s)
	I1025 10:34:19.643535  488429 node_ready.go:49] node "embed-certs-419185" is "Ready"
	I1025 10:34:19.643571  488429 node_ready.go:38] duration metric: took 48.432858ms for node "embed-certs-419185" to be "Ready" ...
	I1025 10:34:19.643584  488429 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:34:19.643655  488429 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:34:19.704585  488429 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.64242637s)
	I1025 10:34:19.704755  488429 api_server.go:72] duration metric: took 7.470312236s to wait for apiserver process to appear ...
	I1025 10:34:19.704768  488429 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:34:19.704785  488429 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:34:19.709374  488429 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-419185 addons enable metrics-server
	
	I1025 10:34:19.712883  488429 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:34:19.716914  488429 addons.go:514] duration metric: took 7.482079988s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:34:19.726700  488429 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1025 10:34:19.726724  488429 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1025 10:34:20.205286  488429 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:34:20.214729  488429 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:34:20.215932  488429 api_server.go:141] control plane version: v1.34.1
	I1025 10:34:20.215963  488429 api_server.go:131] duration metric: took 511.188742ms to wait for apiserver health ...
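The healthz probe above tolerates transient 500s (poststarthook/rbac/bootstrap-roles lags an apiserver restart) and simply retries until the endpoint returns 200 ok, which here took about half a second. A minimal polling sketch in stdlib Go (endpoint copied from the log; InsecureSkipVerify stands in for the cluster-CA handling minikube actually does, so this is illustrative only):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: a real client would trust the cluster CA instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body)
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible in the log
    	}
    	fmt.Println("gave up waiting for healthz")
    }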
	I1025 10:34:20.215974  488429 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:34:20.219625  488429 system_pods.go:59] 8 kube-system pods found
	I1025 10:34:20.219666  488429 system_pods.go:61] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:34:20.219676  488429 system_pods.go:61] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:34:20.219682  488429 system_pods.go:61] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:34:20.219689  488429 system_pods.go:61] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:34:20.219695  488429 system_pods.go:61] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:34:20.219701  488429 system_pods.go:61] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:34:20.219707  488429 system_pods.go:61] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:34:20.219712  488429 system_pods.go:61] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Running
	I1025 10:34:20.219718  488429 system_pods.go:74] duration metric: took 3.737834ms to wait for pod list to return data ...
	I1025 10:34:20.219732  488429 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:34:20.222766  488429 default_sa.go:45] found service account: "default"
	I1025 10:34:20.222792  488429 default_sa.go:55] duration metric: took 3.053655ms for default service account to be created ...
	I1025 10:34:20.222802  488429 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:34:20.226385  488429 system_pods.go:86] 8 kube-system pods found
	I1025 10:34:20.226421  488429 system_pods.go:89] "coredns-66bc5c9577-q85rh" [e4d97f26-45e9-46af-a009-111b0a00784f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:34:20.226436  488429 system_pods.go:89] "etcd-embed-certs-419185" [e9e7d435-1b4b-478b-947e-0e301e8167af] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:34:20.226442  488429 system_pods.go:89] "kindnet-4ncnd" [1b443cbc-f209-4f7f-af12-0461716bb2d0] Running
	I1025 10:34:20.226457  488429 system_pods.go:89] "kube-apiserver-embed-certs-419185" [e60429f2-4545-479d-83f0-58c50867e833] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:34:20.226465  488429 system_pods.go:89] "kube-controller-manager-embed-certs-419185" [8822bf04-b788-4fa7-824c-6566f079e081] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:34:20.226474  488429 system_pods.go:89] "kube-proxy-2vqfc" [9b8b587a-0b0d-4176-a9b1-167e6fc8b1e7] Running
	I1025 10:34:20.226481  488429 system_pods.go:89] "kube-scheduler-embed-certs-419185" [2c573bf0-c9a1-4158-91e8-683bc73f60d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:34:20.226491  488429 system_pods.go:89] "storage-provisioner" [662f0cd5-ae79-463a-8a7a-f84ef27d6fee] Running
	I1025 10:34:20.226498  488429 system_pods.go:126] duration metric: took 3.690901ms to wait for k8s-apps to be running ...
	I1025 10:34:20.226511  488429 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:34:20.226571  488429 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:34:20.241899  488429 system_svc.go:56] duration metric: took 15.378152ms WaitForService to wait for kubelet
	I1025 10:34:20.241930  488429 kubeadm.go:586] duration metric: took 8.00749383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:20.241950  488429 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:34:20.244977  488429 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:34:20.245010  488429 node_conditions.go:123] node cpu capacity is 2
	I1025 10:34:20.245023  488429 node_conditions.go:105] duration metric: took 3.06826ms to run NodePressure ...
	I1025 10:34:20.245035  488429 start.go:241] waiting for startup goroutines ...
	I1025 10:34:20.245042  488429 start.go:246] waiting for cluster config update ...
	I1025 10:34:20.245053  488429 start.go:255] writing updated cluster config ...
	I1025 10:34:20.245349  488429 ssh_runner.go:195] Run: rm -f paused
	I1025 10:34:20.249463  488429 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:20.256066  488429 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:34:22.262335  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:24.266945  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:26.768782  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:29.264021  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:31.269798  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:33.767892  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.394243164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.410015834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.410526521Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.432647982Z" level=info msg="Created container 0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper" id=45e51b3d-f8e3-401f-8ea1-1add4eca70c6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.43690928Z" level=info msg="Starting container: 0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674" id=cfe5878a-1b52-44a4-af27-efa5f9d8419f name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.444247322Z" level=info msg="Started container" PID=1668 containerID=0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper id=cfe5878a-1b52-44a4-af27-efa5f9d8419f name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865
	Oct 25 10:34:19 default-k8s-diff-port-204074 conmon[1666]: conmon 0203f322602e7be00d05 <ninfo>: container 1668 exited with status 1
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.683566272Z" level=info msg="Removing container: fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.698916009Z" level=info msg="Error loading conmon cgroup of container fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55: cgroup deleted" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:19 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:19.713562145Z" level=info msg="Removed container fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs/dashboard-metrics-scraper" id=db5b3092-7ee2-4451-8e19-70b931fe9d96 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.705712981Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.709886441Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.710096898Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.710131746Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713662086Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713698468Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.713724561Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716829975Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716859448Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.716882874Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721595896Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721831666Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.721942158Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.727344357Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:20 default-k8s-diff-port-204074 crio[649]: time="2025-10-25T10:34:20.727376226Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	0203f322602e7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   2                   0fa887aaa0c6a       dashboard-metrics-scraper-6ffb444bf9-d7tbs             kubernetes-dashboard
	db21d161dfeef       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   369b5a02ee88e       storage-provisioner                                    kube-system
	44c36eb0d3af1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   c012f18fe79f1       kubernetes-dashboard-855c9754f9-cf6hc                  kubernetes-dashboard
	493310f9ab129       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   26361de514979       coredns-66bc5c9577-hwczp                               kube-system
	4ca36797ef20a       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   1eaaa24814c47       busybox                                                default
	f79c465a2b069       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   a464bbcd0629a       kindnet-pt5xf                                          kube-system
	86929effcce55       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   369b5a02ee88e       storage-provisioner                                    kube-system
	b1bd8af177626       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   7f2eb00e51d70       kube-proxy-qcgkj                                       kube-system
	4ecd5c6991209       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9e6a16f4b104b       kube-scheduler-default-k8s-diff-port-204074            kube-system
	802d4fb83a2b9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   254d2a5b24857       etcd-default-k8s-diff-port-204074                      kube-system
	357c1c33e5336       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   8ef4cbc2ebaa5       kube-apiserver-default-k8s-diff-port-204074            kube-system
	cf19925569a9e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   6b3482d453c86       kube-controller-manager-default-k8s-diff-port-204074   kube-system
	
	
	==> coredns [493310f9ab129a6e1d6281430845b4e12fbe0244899d8780b7e4d8dca312849b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47191 - 22502 "HINFO IN 8345633951952800569.1020371666462564740. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021608657s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-204074
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-204074
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=default-k8s-diff-port-204074
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-204074
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:34:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:32:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:34:09 +0000   Sat, 25 Oct 2025 10:33:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-204074
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                fedca12f-f823-4d61-b723-4e847b2985b6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-hwczp                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-204074                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m20s
	  kube-system                 kindnet-pt5xf                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-204074             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-204074    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-qcgkj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-204074             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d7tbs              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cf6hc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 53s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m20s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m20s                  kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-204074 event: Registered Node default-k8s-diff-port-204074 in Controller
	  Normal   NodeReady                95s                    kubelet          Node default-k8s-diff-port-204074 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node default-k8s-diff-port-204074 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node default-k8s-diff-port-204074 event: Registered Node default-k8s-diff-port-204074 in Controller
	
	
	==> dmesg <==
	[Oct25 10:12] overlayfs: idmapped layers are currently not supported
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [802d4fb83a2b952f13deb4266ef1896d827f97ddd11eae2520744994b5769f3e] <==
	{"level":"warn","ts":"2025-10-25T10:33:36.421574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.439202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.465929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.486175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.547444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.613212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.683671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.695243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.736647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.791555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.841432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.896165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.920374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:36.973895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.012417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.038830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.064507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.109098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.127544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.158515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.192534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.226519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.263025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:33:37.397236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53956","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-25T10:33:39.372536Z","caller":"traceutil/trace.go:172","msg":"trace[873151183] transaction","detail":"{read_only:false; number_of_response:0; response_revision:488; }","duration":"112.531893ms","start":"2025-10-25T10:33:39.259988Z","end":"2025-10-25T10:33:39.372520Z","steps":["trace[873151183] 'process raft request'  (duration: 112.411473ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:34:35 up  2:17,  0 user,  load average: 3.68, 3.63, 3.16
	Linux default-k8s-diff-port-204074 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f79c465a2b069953f8a630b74e3dc39ad7ac142a9b4f29869e4868e73798c34b] <==
	I1025 10:33:40.481774       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:33:40.482503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:33:40.482636       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:33:40.482649       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:33:40.482663       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:33:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:33:40.705528       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:33:40.705545       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:33:40.705553       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:33:40.705817       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:34:10.705555       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:34:10.705722       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:34:10.705802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:34:10.707061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:34:12.105854       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:34:12.105902       1 metrics.go:72] Registering metrics
	I1025 10:34:12.105998       1 controller.go:711] "Syncing nftables rules"
	I1025 10:34:20.704694       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:34:20.704863       1 main.go:301] handling current node
	I1025 10:34:30.711089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:34:30.711123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [357c1c33e5336db1d9aacea8e98741b1db7d0a5f46bb4c275e97202edaa35037] <==
	I1025 10:33:39.258068       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:33:39.258102       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:33:39.258207       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:33:39.258253       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:33:39.258327       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:33:39.258405       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:33:39.262142       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:33:39.262633       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:33:39.262654       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:33:39.262661       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:33:39.262667       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:33:39.280749       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:33:39.339915       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:33:39.399486       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:33:39.515687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:33:39.658670       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:33:40.199101       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:33:40.381216       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:33:40.463134       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:33:40.499237       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:33:40.864628       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.5.98"}
	I1025 10:33:40.973847       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.25.171"}
	I1025 10:33:43.316271       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:33:43.666177       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:33:43.797506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [cf19925569a9e3157327f48321ecad645bed37c06789fbc66df79fd9cf9c8310] <==
	I1025 10:33:43.212506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:33:43.219649       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:33:43.221836       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:33:43.221925       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:33:43.224128       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:33:43.227455       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:33:43.230115       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:33:43.237442       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:33:43.240792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:33:43.244933       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:33:43.247208       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:33:43.254615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:33:43.254641       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:33:43.254648       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:33:43.257774       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:33:43.257946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:33:43.258305       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:33:43.258554       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:33:43.258608       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:33:43.259444       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:33:43.259681       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:33:43.259714       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:33:43.259730       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:33:43.266405       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1025 10:33:43.266481       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	
	
	==> kube-proxy [b1bd8af17762678ba6f7830c709ab99400853ea3f02ac350b0be0e566844077c] <==
	I1025 10:33:41.223540       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:33:41.332900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:33:41.441861       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:33:41.442062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:33:41.442206       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:33:41.480903       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:33:41.481016       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:33:41.485487       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:33:41.485971       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:33:41.486183       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:33:41.487670       1 config.go:200] "Starting service config controller"
	I1025 10:33:41.488598       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:33:41.489812       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:33:41.489881       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:33:41.489921       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:33:41.489960       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:33:41.494060       1 config.go:309] "Starting node config controller"
	I1025 10:33:41.494137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:33:41.494176       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:33:41.590409       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:33:41.590474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:33:41.590409       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4ecd5c6991209a440ef676eded1a237dc4635cc52d88167118cd3ff569d669ed] <==
	I1025 10:33:37.134929       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:33:41.499281       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:33:41.499468       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:33:41.508016       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:33:41.508281       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:33:41.508332       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:33:41.508388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:33:41.511883       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.512192       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.512255       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:33:41.512265       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:33:41.608944       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:33:41.612307       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:33:41.613093       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902458     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-552g6\" (UniqueName: \"kubernetes.io/projected/63248964-f275-4a0a-af79-0a05bd9965bb-kube-api-access-552g6\") pod \"kubernetes-dashboard-855c9754f9-cf6hc\" (UID: \"63248964-f275-4a0a-af79-0a05bd9965bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc"
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902483     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wd9b\" (UniqueName: \"kubernetes.io/projected/9a6f006d-b817-47bd-9c92-a78a2188f301-kube-api-access-7wd9b\") pod \"dashboard-metrics-scraper-6ffb444bf9-d7tbs\" (UID: \"9a6f006d-b817-47bd-9c92-a78a2188f301\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs"
	Oct 25 10:33:43 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:43.902507     775 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/63248964-f275-4a0a-af79-0a05bd9965bb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cf6hc\" (UID: \"63248964-f275-4a0a-af79-0a05bd9965bb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc"
	Oct 25 10:33:44 default-k8s-diff-port-204074 kubelet[775]: W1025 10:33:44.716587     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e WatchSource:0}: Error finding container c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e: Status 404 returned error can't find the container with id c012f18fe79f12eb75a50ca3323a3ae5c218a535da19c4c9d815533d6182726e
	Oct 25 10:33:44 default-k8s-diff-port-204074 kubelet[775]: W1025 10:33:44.734521     775 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/114adef2e3f9f4d970639b5c8a68c00b64371efcafc97f0bb2c3652589f1f63a/crio-0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865 WatchSource:0}: Error finding container 0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865: Status 404 returned error can't find the container with id 0fa887aaa0c6a33e0eb2a41248e3ab52fdb6ca301370f4e4685bb242b4cc5865
	Oct 25 10:33:45 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:45.073868     775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 25 10:33:53 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:53.439248     775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cf6hc" podStartSLOduration=4.689306287 podStartE2EDuration="10.439227875s" podCreationTimestamp="2025-10-25 10:33:43 +0000 UTC" firstStartedPulling="2025-10-25 10:33:44.719969716 +0000 UTC m=+11.484197010" lastFinishedPulling="2025-10-25 10:33:50.469891295 +0000 UTC m=+17.234118598" observedRunningTime="2025-10-25 10:33:51.627006666 +0000 UTC m=+18.391233969" watchObservedRunningTime="2025-10-25 10:33:53.439227875 +0000 UTC m=+20.203455178"
	Oct 25 10:33:55 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:55.606201     775 scope.go:117] "RemoveContainer" containerID="8c2df29c4c7049269ca5a6916a4a4d67a6c6811911f3737d936da9da459d9e71"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:56.611029     775 scope.go:117] "RemoveContainer" containerID="8c2df29c4c7049269ca5a6916a4a4d67a6c6811911f3737d936da9da459d9e71"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:56.611677     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:33:56 default-k8s-diff-port-204074 kubelet[775]: E1025 10:33:56.611928     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:33:57 default-k8s-diff-port-204074 kubelet[775]: I1025 10:33:57.615454     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:33:57 default-k8s-diff-port-204074 kubelet[775]: E1025 10:33:57.615609     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:04 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:04.692209     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:04 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:04.693016     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:11 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:11.652112     775 scope.go:117] "RemoveContainer" containerID="86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.386143     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.675990     775 scope.go:117] "RemoveContainer" containerID="fd5906f5a827a0fa41988f8c17cb5c8919d14478fb01d6184ff0560c02f0fd55"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:19.676299     775 scope.go:117] "RemoveContainer" containerID="0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	Oct 25 10:34:19 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:19.676451     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:24 default-k8s-diff-port-204074 kubelet[775]: I1025 10:34:24.692850     775 scope.go:117] "RemoveContainer" containerID="0203f322602e7be00d05eff27792fd64d0786bf638a9f1139e07118ba2b1b674"
	Oct 25 10:34:24 default-k8s-diff-port-204074 kubelet[775]: E1025 10:34:24.693468     775 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d7tbs_kubernetes-dashboard(9a6f006d-b817-47bd-9c92-a78a2188f301)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d7tbs" podUID="9a6f006d-b817-47bd-9c92-a78a2188f301"
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:34:29 default-k8s-diff-port-204074 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [44c36eb0d3af1c70f460f7ff95b889938f58449578c41fdc1a6f5a428c39018d] <==
	2025/10/25 10:33:50 Using namespace: kubernetes-dashboard
	2025/10/25 10:33:50 Using in-cluster config to connect to apiserver
	2025/10/25 10:33:50 Using secret token for csrf signing
	2025/10/25 10:33:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:33:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:33:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:33:50 Generating JWE encryption key
	2025/10/25 10:33:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:33:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:33:51 Initializing JWE encryption key from synchronized object
	2025/10/25 10:33:51 Creating in-cluster Sidecar client
	2025/10/25 10:33:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:33:51 Serving insecurely on HTTP port: 9090
	2025/10/25 10:34:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:33:50 Starting overwatch
	
	
	==> storage-provisioner [86929effcce559f05f40fcb87fb161dcf0cc18a45ed38b458b52dd1a4a6ce189] <==
	I1025 10:33:40.978286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:34:10.990399       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [db21d161dfeef87ab0f7be598156f8aef8912dc979c9d322d68c986b0d00d2c6] <==
	I1025 10:34:11.730299       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:34:11.771824       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:34:11.772267       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:34:11.775527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:15.234567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:19.494824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:23.093568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:26.147476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.176598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.193509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:34:29.193660       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:34:29.193838       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca!
	I1025 10:34:29.193885       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0b6162f7-ef21-4da6-838b-9cd22ec3453b", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca became leader
	W1025 10:34:29.210937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:29.233861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:34:29.294363       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-204074_8d434304-0fdb-4068-a42e-d2b7c2da6dca!
	W1025 10:34:31.245520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:31.250838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:33.254596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:33.262746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:35.266331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:35.272236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074: exit status 2 (371.727818ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.63s)
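
Root-cause note: the embed-certs trace below shows the mechanism shared by the Pause failures in this run — `minikube pause` enumerates running containers with `sudo runc list -f json`, which exits 1 on these cri-o nodes because `/run/runc` does not exist. A minimal manual check, offered only as a sketch (the profile name and both commands are taken verbatim from the logs in this report, and the profile must still be running):

	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-204074 -- sudo runc list -f json
	# expected on these nodes: exit status 1 with "open /run/runc: no such file or directory"
	out/minikube-linux-arm64 ssh -p default-k8s-diff-port-204074 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the same containers remain visible through the CRI, which is how the pause code collected the container IDs above

The earlier `status --format={{.APIServer}}` call printing Running while exiting 2 is consistent with this: the failed pause had already run `systemctl disable --now kubelet` (the kubelet log above ends with systemd stopping kubelet.service), so the apiserver container is still up while the node as a whole is not healthy.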

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-419185 --alsologtostderr -v=1
E1025 10:35:08.026561  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-419185 --alsologtostderr -v=1: exit status 80 (2.073763416s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-419185 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:35:06.218092  494563 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:35:06.218461  494563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:06.218471  494563 out.go:374] Setting ErrFile to fd 2...
	I1025 10:35:06.218476  494563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:06.218722  494563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:35:06.218988  494563 out.go:368] Setting JSON to false
	I1025 10:35:06.219014  494563 mustload.go:65] Loading cluster: embed-certs-419185
	I1025 10:35:06.219487  494563 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:06.219970  494563 cli_runner.go:164] Run: docker container inspect embed-certs-419185 --format={{.State.Status}}
	I1025 10:35:06.247230  494563 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:35:06.247532  494563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:06.351778  494563 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:true NGoroutines:78 SystemTime:2025-10-25 10:35:06.340979841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:06.352500  494563 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-419185 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:35:06.357509  494563 out.go:179] * Pausing node embed-certs-419185 ... 
	I1025 10:35:06.362609  494563 host.go:66] Checking if "embed-certs-419185" exists ...
	I1025 10:35:06.362947  494563 ssh_runner.go:195] Run: systemctl --version
	I1025 10:35:06.362992  494563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-419185
	I1025 10:35:06.383092  494563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33447 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/embed-certs-419185/id_rsa Username:docker}
	I1025 10:35:06.498054  494563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:06.521480  494563 pause.go:52] kubelet running: true
	I1025 10:35:06.521549  494563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:35:06.843020  494563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:35:06.843116  494563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:35:06.920434  494563 cri.go:89] found id: "2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587"
	I1025 10:35:06.920462  494563 cri.go:89] found id: "24ede3e861b571b41dacad659ea362061f94f90095e464ea06917f9e1f4b828b"
	I1025 10:35:06.920467  494563 cri.go:89] found id: "fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc"
	I1025 10:35:06.920471  494563 cri.go:89] found id: "b9b56386599ed53148fe4edb01fdb3a09ac28c031475b6f0f910103b06e5915e"
	I1025 10:35:06.920474  494563 cri.go:89] found id: "0ae68418f11b666da7da5e8a9533b93c71476592288f12ee5e2240252976f3a9"
	I1025 10:35:06.920478  494563 cri.go:89] found id: "fa7fdbde79e116585ff7bd6892d6145e4f4dbd9d48734b75cf7c4527c5f3dd33"
	I1025 10:35:06.920481  494563 cri.go:89] found id: "5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1"
	I1025 10:35:06.920484  494563 cri.go:89] found id: "f217878a1e424333492789b8a51f60ae7e258ef0746c75ef438b3edd64069f81"
	I1025 10:35:06.920487  494563 cri.go:89] found id: "e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33"
	I1025 10:35:06.920494  494563 cri.go:89] found id: "585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	I1025 10:35:06.920497  494563 cri.go:89] found id: "06b3300bd29e97068f4dd4ed1769a529ce119164f2e4915858c3b1bcd3c78d18"
	I1025 10:35:06.920500  494563 cri.go:89] found id: ""
	I1025 10:35:06.920614  494563 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:35:06.933192  494563 retry.go:31] will retry after 252.967382ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:35:06Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:35:07.186622  494563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:07.215794  494563 pause.go:52] kubelet running: false
	I1025 10:35:07.215900  494563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:35:07.454509  494563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:35:07.454658  494563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:35:07.565209  494563 cri.go:89] found id: "2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587"
	I1025 10:35:07.565274  494563 cri.go:89] found id: "24ede3e861b571b41dacad659ea362061f94f90095e464ea06917f9e1f4b828b"
	I1025 10:35:07.565293  494563 cri.go:89] found id: "fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc"
	I1025 10:35:07.565314  494563 cri.go:89] found id: "b9b56386599ed53148fe4edb01fdb3a09ac28c031475b6f0f910103b06e5915e"
	I1025 10:35:07.565344  494563 cri.go:89] found id: "0ae68418f11b666da7da5e8a9533b93c71476592288f12ee5e2240252976f3a9"
	I1025 10:35:07.565366  494563 cri.go:89] found id: "fa7fdbde79e116585ff7bd6892d6145e4f4dbd9d48734b75cf7c4527c5f3dd33"
	I1025 10:35:07.565387  494563 cri.go:89] found id: "5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1"
	I1025 10:35:07.565406  494563 cri.go:89] found id: "f217878a1e424333492789b8a51f60ae7e258ef0746c75ef438b3edd64069f81"
	I1025 10:35:07.565434  494563 cri.go:89] found id: "e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33"
	I1025 10:35:07.565459  494563 cri.go:89] found id: "585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	I1025 10:35:07.565479  494563 cri.go:89] found id: "06b3300bd29e97068f4dd4ed1769a529ce119164f2e4915858c3b1bcd3c78d18"
	I1025 10:35:07.565510  494563 cri.go:89] found id: ""
	I1025 10:35:07.565576  494563 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:35:07.580333  494563 retry.go:31] will retry after 286.099006ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:35:07Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:35:07.866820  494563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:07.881543  494563 pause.go:52] kubelet running: false
	I1025 10:35:07.881611  494563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:35:08.106567  494563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:35:08.106643  494563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:35:08.186563  494563 cri.go:89] found id: "2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587"
	I1025 10:35:08.186582  494563 cri.go:89] found id: "24ede3e861b571b41dacad659ea362061f94f90095e464ea06917f9e1f4b828b"
	I1025 10:35:08.186587  494563 cri.go:89] found id: "fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc"
	I1025 10:35:08.186591  494563 cri.go:89] found id: "b9b56386599ed53148fe4edb01fdb3a09ac28c031475b6f0f910103b06e5915e"
	I1025 10:35:08.186595  494563 cri.go:89] found id: "0ae68418f11b666da7da5e8a9533b93c71476592288f12ee5e2240252976f3a9"
	I1025 10:35:08.186607  494563 cri.go:89] found id: "fa7fdbde79e116585ff7bd6892d6145e4f4dbd9d48734b75cf7c4527c5f3dd33"
	I1025 10:35:08.186611  494563 cri.go:89] found id: "5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1"
	I1025 10:35:08.186614  494563 cri.go:89] found id: "f217878a1e424333492789b8a51f60ae7e258ef0746c75ef438b3edd64069f81"
	I1025 10:35:08.186617  494563 cri.go:89] found id: "e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33"
	I1025 10:35:08.186626  494563 cri.go:89] found id: "585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	I1025 10:35:08.186630  494563 cri.go:89] found id: "06b3300bd29e97068f4dd4ed1769a529ce119164f2e4915858c3b1bcd3c78d18"
	I1025 10:35:08.186632  494563 cri.go:89] found id: ""
	I1025 10:35:08.186680  494563 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:35:08.202297  494563 out.go:203] 
	W1025 10:35:08.205711  494563 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:35:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:35:08.205734  494563 out.go:285] * 
	W1025 10:35:08.212851  494563 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:35:08.218198  494563 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-419185 --alsologtostderr -v=1 failed: exit status 80
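Note on the failure mode: minikube's pause path shells into the node and enumerates containers with `sudo runc list -f json`, but on this CRI-O node the runc state directory /run/runc does not exist, so every attempt (and each retry) exits with status 1 until minikube gives up with GUEST_PAUSE. A minimal reproduction sketch, assuming the profile name taken from the log above:

	# confirm the missing runc state directory inside the node
	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- sudo ls /run/runc
	# reproduce the exact failing call from the log
	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- sudo runc list -f json
	# CRI-O itself still sees the containers, as the crictl calls in the log show
	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system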
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-419185
helpers_test.go:243: (dbg) docker inspect embed-certs-419185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	        "Created": "2025-10-25T10:32:21.18342263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:34:04.320676763Z",
	            "FinishedAt": "2025-10-25T10:34:03.248790349Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hosts",
	        "LogPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa-json.log",
	        "Name": "/embed-certs-419185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-419185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-419185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	                "LowerDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-419185",
	                "Source": "/var/lib/docker/volumes/embed-certs-419185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-419185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-419185",
	                "name.minikube.sigs.k8s.io": "embed-certs-419185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9426b7a1179cac1aec836f59e9cc27f57719da01b70042b18b2642e2cd3edcda",
	            "SandboxKey": "/var/run/docker/netns/9426b7a1179c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-419185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:16:c2:c4:45:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2813ac098a0563027aa465aa29bfe18ee37b22086f641503f6265d21106417e7",
	                    "EndpointID": "1fce494b5e3a7fe3b240b0382b47a867470290fa0063b785ce7d1eeef2cb662a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-419185",
	                        "1fda185b5ef1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
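The full docker inspect dump above is rarely needed whole; individual fields can be pulled with standard docker CLI Go templates (not a minikube feature). A sketch against the same container:

	# host port mapped to the API server (8443/tcp inside the container)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-419185
	# quick state summary matching the State block above
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' embed-certs-419185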
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185: exit status 2 (446.717857ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
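Exit status 2 here reflects a degraded-but-present cluster: the failed pause had already run `systemctl disable --now kubelet` (see the log above), so the host container is Running while kubelet is stopped, and status reports that as a non-zero exit. Other components can be queried with the same template mechanism; the Kubelet and APIServer field names below are assumptions based on minikube's standard status fields:

	out/minikube-linux-arm64 status -p embed-certs-419185 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'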
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25: (1.664800829s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:34:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:34:39.748518  492025 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:34:39.748728  492025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:39.748757  492025 out.go:374] Setting ErrFile to fd 2...
	I1025 10:34:39.748777  492025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:39.749140  492025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:34:39.749639  492025 out.go:368] Setting JSON to false
	I1025 10:34:39.750883  492025 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8230,"bootTime":1761380250,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:34:39.750994  492025 start.go:141] virtualization:  
	I1025 10:34:39.756991  492025 out.go:179] * [no-preload-768303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:34:39.761733  492025 notify.go:220] Checking for updates...
	I1025 10:34:39.762257  492025 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:34:39.765831  492025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:34:39.769112  492025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:39.772300  492025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:34:39.775442  492025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:34:39.778456  492025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:34:39.782142  492025 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:39.782273  492025 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:34:39.814850  492025 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:34:39.815043  492025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:39.880326  492025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:39.868431492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:39.880469  492025 docker.go:318] overlay module found
	I1025 10:34:39.883622  492025 out.go:179] * Using the docker driver based on user configuration
	I1025 10:34:39.886598  492025 start.go:305] selected driver: docker
	I1025 10:34:39.886634  492025 start.go:925] validating driver "docker" against <nil>
	I1025 10:34:39.886650  492025 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:34:39.887668  492025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:39.949856  492025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:39.939947941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:39.950048  492025 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:34:39.950300  492025 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:39.954196  492025 out.go:179] * Using Docker driver with root privileges
	I1025 10:34:39.957149  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:34:39.957221  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:39.957229  492025 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:34:39.957325  492025 start.go:349] cluster config:
	{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:39.962724  492025 out.go:179] * Starting "no-preload-768303" primary control-plane node in "no-preload-768303" cluster
	I1025 10:34:39.965636  492025 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:34:39.968478  492025 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:34:39.971461  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:39.971574  492025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:34:39.971606  492025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:34:39.971641  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json: {Name:mka01f7e8a098c7a1beb738ad84816d292c28e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:39.971963  492025 cache.go:107] acquiring lock: {Name:mkcb674bf6bbc265e760bf8be116a57186608a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972020  492025 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:34:39.972027  492025 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.321µs
	I1025 10:34:39.972035  492025 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:34:39.972045  492025 cache.go:107] acquiring lock: {Name:mk9facf4e59193f96d96012cf82ef7fef364093d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972113  492025 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:39.972419  492025 cache.go:107] acquiring lock: {Name:mk1e264701efd819526cb1327aac37ba6383079c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972555  492025 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:39.972799  492025 cache.go:107] acquiring lock: {Name:mk145e03dafbcb30f74a27f99b5fba1addf06371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972953  492025 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:39.973162  492025 cache.go:107] acquiring lock: {Name:mkb1799d37a5611969ac9809065db3c631238657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973298  492025 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:39.973466  492025 cache.go:107] acquiring lock: {Name:mkd43195497e2780982a3de630a4cda8f1c812f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973609  492025 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:34:39.973850  492025 cache.go:107] acquiring lock: {Name:mk92a2a5fb8dde9e51922a55162996cccaaf10a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973961  492025 cache.go:107] acquiring lock: {Name:mk2866f59a9236262f732426434fc9bafb724b61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.974002  492025 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:39.974100  492025 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:39.975457  492025 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:34:39.975953  492025 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:39.977030  492025 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:39.977281  492025 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:39.978197  492025 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:39.978773  492025 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:39.979281  492025 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:40.060512  492025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:34:40.060612  492025 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:34:40.060663  492025 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:34:40.060696  492025 start.go:360] acquireMachinesLock for no-preload-768303: {Name:mkf575e11dd83318b723f79e28f313be28102c7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:40.060916  492025 start.go:364] duration metric: took 164.441µs to acquireMachinesLock for "no-preload-768303"
	I1025 10:34:40.061019  492025 start.go:93] Provisioning new machine with config: &{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:34:40.061147  492025 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:34:41.265633  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:43.762675  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:40.065097  492025 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:34:40.065430  492025 start.go:159] libmachine.API.Create for "no-preload-768303" (driver="docker")
	I1025 10:34:40.065486  492025 client.go:168] LocalClient.Create starting
	I1025 10:34:40.065570  492025 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:34:40.065622  492025 main.go:141] libmachine: Decoding PEM data...
	I1025 10:34:40.065642  492025 main.go:141] libmachine: Parsing certificate...
	I1025 10:34:40.065713  492025 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:34:40.065739  492025 main.go:141] libmachine: Decoding PEM data...
	I1025 10:34:40.065761  492025 main.go:141] libmachine: Parsing certificate...
	I1025 10:34:40.066185  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:34:40.087117  492025 cli_runner.go:211] docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:34:40.087259  492025 network_create.go:284] running [docker network inspect no-preload-768303] to gather additional debugging logs...
	I1025 10:34:40.087335  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303
	W1025 10:34:40.108897  492025 cli_runner.go:211] docker network inspect no-preload-768303 returned with exit code 1
	I1025 10:34:40.108938  492025 network_create.go:287] error running [docker network inspect no-preload-768303]: docker network inspect no-preload-768303: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-768303 not found
	I1025 10:34:40.108953  492025 network_create.go:289] output of [docker network inspect no-preload-768303]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-768303 not found
	
	** /stderr **
	I1025 10:34:40.109075  492025 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:40.136206  492025 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:34:40.136618  492025 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:34:40.137004  492025 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:34:40.137388  492025 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2813ac098a05 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:11:56:2a:1e:79} reservation:<nil>}
	I1025 10:34:40.137905  492025 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e46090}
	I1025 10:34:40.137961  492025 network_create.go:124] attempt to create docker network no-preload-768303 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:34:40.138063  492025 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-768303 no-preload-768303
	I1025 10:34:40.218391  492025 network_create.go:108] docker network no-preload-768303 192.168.85.0/24 created
	I1025 10:34:40.218425  492025 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-768303" container
	I1025 10:34:40.218501  492025 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:34:40.236546  492025 cli_runner.go:164] Run: docker volume create no-preload-768303 --label name.minikube.sigs.k8s.io=no-preload-768303 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:34:40.272283  492025 oci.go:103] Successfully created a docker volume no-preload-768303
	I1025 10:34:40.272430  492025 cli_runner.go:164] Run: docker run --rm --name no-preload-768303-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-768303 --entrypoint /usr/bin/test -v no-preload-768303:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:34:40.294054  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:34:40.297218  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:34:40.299571  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:34:40.303790  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:34:40.310716  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:34:40.310944  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:34:40.320755  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:34:40.351444  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:34:40.351470  492025 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 378.00723ms
	I1025 10:34:40.351481  492025 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:34:40.583018  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:34:40.583104  492025 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 609.946623ms
	I1025 10:34:40.583194  492025 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:34:40.896768  492025 oci.go:107] Successfully prepared a docker volume no-preload-768303
	I1025 10:34:40.896804  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1025 10:34:40.896950  492025 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:34:40.897056  492025 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:34:41.006669  492025 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-768303 --name no-preload-768303 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-768303 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-768303 --network no-preload-768303 --ip 192.168.85.2 --volume no-preload-768303:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:34:41.165506  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:34:41.165762  492025 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.192962944s
	I1025 10:34:41.165859  492025 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:34:41.184285  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:34:41.192769  492025 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.218804258s
	I1025 10:34:41.192794  492025 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:34:41.292076  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:34:41.292104  492025 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.319691046s
	I1025 10:34:41.292123  492025 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:34:41.294156  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:34:41.294192  492025 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.322145544s
	I1025 10:34:41.294203  492025 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:34:41.439265  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Running}}
	I1025 10:34:41.474967  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:41.516378  492025 cli_runner.go:164] Run: docker exec no-preload-768303 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:34:41.597814  492025 oci.go:144] the created container "no-preload-768303" has a running status.
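
For orientation, the running-status probe logged above (oci.go:144, after the docker container inspect calls) reduces to polling docker's inspect template until it prints true. A minimal standalone sketch in Go; pollRunning is a hypothetical helper, not minikube's actual implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// pollRunning shells out to `docker container inspect` until the container
// reports State.Running=true or the deadline passes.
func pollRunning(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", "{{.State.Running}}", name).Output()
		if err == nil && strings.TrimSpace(string(out)) == "true" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %q not running after %s", name, timeout)
}

func main() {
	if err := pollRunning("no-preload-768303", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
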
	I1025 10:34:41.597845  492025 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa...
	I1025 10:34:41.983963  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:34:41.983999  492025 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.01015107s
	I1025 10:34:41.984011  492025 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:34:41.984022  492025 cache.go:87] Successfully saved all images to host disk.
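
The cache.go "exists" / "save to tar file ... succeeded" lines above reflect a simple stat-before-download check on the per-arch tarball path. A minimal sketch, assuming the path layout shown in the log (the cacheDir value below is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cachedImageExists reports whether the per-arch tarball for an image is
// already on disk, mirroring the cache.go:157 "exists" checks above.
func cachedImageExists(cacheDir, image string) (string, bool) {
	// registry.k8s.io/pause:3.10.1 -> <cacheDir>/registry.k8s.io/pause_3.10.1
	path := cacheDir + "/" + strings.ReplaceAll(image, ":", "_")
	_, err := os.Stat(path)
	return path, err == nil
}

func main() {
	path, ok := cachedImageExists("/tmp/minikube-cache/images/arm64",
		"registry.k8s.io/pause:3.10.1")
	fmt.Println(path, ok)
}
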
	I1025 10:34:42.389856  492025 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:34:42.415283  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:42.434990  492025 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:34:42.435012  492025 kic_runner.go:114] Args: [docker exec --privileged no-preload-768303 chown docker:docker /home/docker/.ssh/authorized_keys]
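
The id_rsa keypair created by kic.go:225 and pushed into /home/docker/.ssh/authorized_keys (the 381 bytes above is one OpenSSH-format RSA public-key line) can be reproduced with the standard library plus golang.org/x/crypto/ssh. A sketch, not the exact minikube code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key, roughly what kic.go:225 does for id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// PEM-encode the private key (the contents of id_rsa).
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// One authorized_keys line (the contents of id_rsa.pub).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("private: %d bytes, authorized_keys: %s",
		len(priv), ssh.MarshalAuthorizedKey(pub))
}
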
	I1025 10:34:42.479288  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:42.498195  492025 machine.go:93] provisionDockerMachine start ...
	I1025 10:34:42.498295  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:42.517039  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:42.517540  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:42.517557  492025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:34:42.518499  492025 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58604->127.0.0.1:33452: read: connection reset by peer
	I1025 10:34:45.670998  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:34:45.671023  492025 ubuntu.go:182] provisioning hostname "no-preload-768303"
	I1025 10:34:45.671132  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:45.689701  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:45.690016  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:45.690033  492025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-768303 && echo "no-preload-768303" | sudo tee /etc/hostname
	I1025 10:34:45.850805  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:34:45.850888  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:45.868758  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:45.869075  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:45.869099  492025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-768303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-768303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-768303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:34:46.023593  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
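
The SSH script above ensures /etc/hosts maps some address to the new hostname, rewriting an existing 127.0.1.1 entry when present and appending one otherwise. The same transformation expressed in Go for clarity; ensureHostname is a hypothetical helper operating on the file contents as a string:

package main

import (
	"fmt"
	"strings"
)

// ensureHostname mirrors the shell above: if no line already ends in the
// hostname, rewrite an existing 127.0.1.1 entry or append a new one.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, "\t"+name) || strings.HasSuffix(l, " "+name) {
			return hosts // already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(ensureHostname("127.0.0.1 localhost", "no-preload-768303"))
}
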
	I1025 10:34:46.023618  492025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:34:46.023636  492025 ubuntu.go:190] setting up certificates
	I1025 10:34:46.023645  492025 provision.go:84] configureAuth start
	I1025 10:34:46.023721  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:46.042373  492025 provision.go:143] copyHostCerts
	I1025 10:34:46.042449  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:34:46.042468  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:34:46.042552  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:34:46.042646  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:34:46.042658  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:34:46.042695  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:34:46.042761  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:34:46.042773  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:34:46.042800  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:34:46.042853  492025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.no-preload-768303 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-768303]
	I1025 10:34:46.313446  492025 provision.go:177] copyRemoteCerts
	I1025 10:34:46.313518  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:34:46.313557  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.332035  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:46.443627  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:34:46.462007  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:34:46.480054  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:34:46.498378  492025 provision.go:87] duration metric: took 474.707967ms to configureAuth
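
configureAuth's copyHostCerts step (the found/rm/cp triplets above) is a remove-then-copy of each host certificate. A minimal sketch of that pattern; replaceFile and the paths in main are hypothetical:

package main

import (
	"fmt"
	"io"
	"os"
)

// replaceFile mirrors the found/rm/cp sequence above: drop any stale copy
// at dst, then copy src into place.
func replaceFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	fmt.Printf("cp: %s --> %s (%d bytes)\n", src, dst, n)
	return err
}

func main() {
	if err := replaceFile("ca.pem", "/tmp/ca.pem"); err != nil {
		fmt.Println(err)
	}
}
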
	I1025 10:34:46.498407  492025 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:34:46.498600  492025 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:46.498716  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.517287  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:46.517634  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:46.517658  492025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:34:46.868055  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:34:46.868076  492025 machine.go:96] duration metric: took 4.369857006s to provisionDockerMachine
	I1025 10:34:46.868085  492025 client.go:171] duration metric: took 6.802592461s to LocalClient.Create
	I1025 10:34:46.868096  492025 start.go:167] duration metric: took 6.802667753s to libmachine.API.Create "no-preload-768303"
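
The /etc/sysconfig/crio.minikube write a few lines up marks the cluster's service CIDR (10.96.0.0/12 in this profile) as an insecure registry range and restarts cri-o. A sketch of how that remote command string could be assembled; this is an illustration, not minikube's exact code:

package main

import "fmt"

func main() {
	// Rebuild the remote command shown in the log; the insecure-registry
	// CIDR is the cluster's service CIDR (10.96.0.0/12).
	opts := "--insecure-registry 10.96.0.0/12 "
	cmd := fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='%s'
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, opts)
	fmt.Println(cmd)
}
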
	I1025 10:34:46.868103  492025 start.go:293] postStartSetup for "no-preload-768303" (driver="docker")
	I1025 10:34:46.868114  492025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:34:46.868196  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:34:46.868241  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.886858  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:46.991335  492025 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:34:46.994663  492025 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:34:46.994689  492025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:34:46.994700  492025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:34:46.994763  492025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:34:46.994849  492025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:34:46.994954  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:34:47.004029  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:47.022582  492025 start.go:296] duration metric: took 154.465638ms for postStartSetup
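
The filesync scan above mirrors everything under .minikube/files onto the node at the same relative path, which is how 2940172.pem lands in /etc/ssl/certs. A sketch of that mapping; localRoot is a hypothetical example directory:

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Walk a local "files" tree and print the remote path each asset maps
	// to, mirroring the filesync.go:149 "local asset" line above.
	localRoot := "/tmp/minikube-files"
	filepath.WalkDir(localRoot, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return nil // skip unreadable entries and directories
		}
		remote := strings.TrimPrefix(p, localRoot) // e.g. /etc/ssl/certs/x.pem
		fmt.Printf("local asset: %s -> %s\n", p, remote)
		return nil
	})
}
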
	I1025 10:34:47.022951  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:47.043336  492025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:34:47.043635  492025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:34:47.043683  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.062302  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.164476  492025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:34:47.169585  492025 start.go:128] duration metric: took 7.108421136s to createHost
	I1025 10:34:47.169609  492025 start.go:83] releasing machines lock for "no-preload-768303", held for 7.108677764s
	I1025 10:34:47.169684  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:47.187523  492025 ssh_runner.go:195] Run: cat /version.json
	I1025 10:34:47.187575  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.187606  492025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:34:47.187670  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.212421  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.229341  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.319678  492025 ssh_runner.go:195] Run: systemctl --version
	I1025 10:34:47.425704  492025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:34:47.463394  492025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:34:47.467900  492025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:34:47.467971  492025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:34:47.519836  492025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:34:47.519860  492025 start.go:495] detecting cgroup driver to use...
	I1025 10:34:47.519894  492025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:34:47.519948  492025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:34:47.546644  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:34:47.563402  492025 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:34:47.563470  492025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:34:47.582328  492025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:34:47.602471  492025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:34:47.755248  492025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:34:47.884960  492025 docker.go:234] disabling docker service ...
	I1025 10:34:47.885081  492025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:34:47.908849  492025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:34:47.922417  492025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:34:48.049128  492025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:34:48.175579  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:34:48.188846  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:34:48.202716  492025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:34:48.202783  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.211649  492025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:34:48.211765  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.220912  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.230323  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.239739  492025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:34:48.248328  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.259780  492025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.275347  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.285571  492025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:34:48.294033  492025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:34:48.301710  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:48.421542  492025 ssh_runner.go:195] Run: sudo systemctl restart crio
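
The sed invocations above pin cri-o's pause image and switch its cgroup driver by rewriting /etc/crio/crio.conf.d/02-crio.conf in place. The same two edits expressed as Go regexps over the file contents, purely as an illustration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "systemd"`
	// Equivalent of the two sed -i invocations above: pin the pause image
	// and switch cri-o to the cgroupfs driver.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Println(conf)
}
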
	I1025 10:34:48.570574  492025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:34:48.570658  492025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:34:48.574785  492025 start.go:563] Will wait 60s for crictl version
	I1025 10:34:48.574853  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:48.578566  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:34:48.603946  492025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
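
The two "Will wait 60s" lines reflect a bounded poll: stat the socket (and then crictl) until it appears or the deadline passes. A minimal sketch of such a wait; waitForSocket is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the deadline passes,
// mirroring the 60s waits for crio.sock and crictl above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(250 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
}
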
	I1025 10:34:48.604039  492025 ssh_runner.go:195] Run: crio --version
	I1025 10:34:48.638554  492025 ssh_runner.go:195] Run: crio --version
	I1025 10:34:48.677488  492025 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:34:46.263726  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:48.264262  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:48.680391  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:48.696692  492025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:34:48.700522  492025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:34:48.710142  492025 kubeadm.go:883] updating cluster {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:34:48.710250  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:48.710292  492025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:48.735133  492025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 10:34:48.735197  492025 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 10:34:48.735244  492025 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:48.735446  492025 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:48.735534  492025 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:48.735617  492025 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:48.735700  492025 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:48.735783  492025 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:34:48.735876  492025 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:48.735982  492025 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:48.736997  492025 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:48.737107  492025 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:48.737169  492025 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:34:48.737227  492025 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:48.738249  492025 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:48.738451  492025 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:48.738594  492025 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:48.738873  492025 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.002679  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.003259  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.008637  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.009175  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1025 10:34:49.010871  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.094967  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.138230  492025 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1025 10:34:49.138274  492025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.138332  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.138398  492025 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1025 10:34:49.138417  492025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.138438  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.142514  492025 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1025 10:34:49.142559  492025 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.142606  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.142677  492025 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1025 10:34:49.142697  492025 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1025 10:34:49.142718  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.151971  492025 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1025 10:34:49.152011  492025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.152061  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.157379  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.157445  492025 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1025 10:34:49.157481  492025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.157512  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.157563  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.160287  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.160658  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.163733  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.220575  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.253947  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.254104  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.254200  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.267118  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.267383  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.267479  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.300919  492025 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1025 10:34:49.300965  492025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.301017  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.376201  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.376268  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.376308  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.381321  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.381396  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.381451  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.381527  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
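
The "needs transfer" decisions above come from comparing the image ID reported by the runtime against the hash expected for the cached tarball; a mismatch (or a missing image) forces a crictl rmi plus a reload from cache. A sketch of that check; needsTransfer is hypothetical, and the hash below is the pause image's ID from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer mirrors cache_images.go:117: ask the runtime for the image
// ID and compare it against the hash recorded for the cached tarball.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present in the runtime at all
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.10.1",
		"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd"))
}
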
	I1025 10:34:49.471702  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:34:49.471805  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:49.471882  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.471927  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:34:49.471979  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:49.503240  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:34:49.503412  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:49.503512  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:34:49.503590  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.503678  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:34:49.503758  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:34:49.503883  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.521586  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1025 10:34:49.521626  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1025 10:34:49.521686  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1025 10:34:49.521702  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1025 10:34:49.521754  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:34:49.521827  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:49.567414  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1025 10:34:49.567458  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1025 10:34:49.567542  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.567585  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1025 10:34:49.567601  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1025 10:34:49.567636  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1025 10:34:49.567653  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1025 10:34:49.567689  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1025 10:34:49.567703  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1025 10:34:49.726576  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:34:49.726747  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1025 10:34:50.268182  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:52.763007  488429 pod_ready.go:94] pod "coredns-66bc5c9577-q85rh" is "Ready"
	I1025 10:34:52.763032  488429 pod_ready.go:86] duration metric: took 32.506926744s for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.767132  488429 pod_ready.go:83] waiting for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.773200  488429 pod_ready.go:94] pod "etcd-embed-certs-419185" is "Ready"
	I1025 10:34:52.773276  488429 pod_ready.go:86] duration metric: took 6.065799ms for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.776901  488429 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.783181  488429 pod_ready.go:94] pod "kube-apiserver-embed-certs-419185" is "Ready"
	I1025 10:34:52.783255  488429 pod_ready.go:86] duration metric: took 6.282983ms for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.786004  488429 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.964910  488429 pod_ready.go:94] pod "kube-controller-manager-embed-certs-419185" is "Ready"
	I1025 10:34:52.964988  488429 pod_ready.go:86] duration metric: took 178.91153ms for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.159662  488429 pod_ready.go:83] waiting for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.559503  488429 pod_ready.go:94] pod "kube-proxy-2vqfc" is "Ready"
	I1025 10:34:53.559526  488429 pod_ready.go:86] duration metric: took 399.78635ms for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.759916  488429 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:54.160296  488429 pod_ready.go:94] pod "kube-scheduler-embed-certs-419185" is "Ready"
	I1025 10:34:54.160324  488429 pod_ready.go:86] duration metric: took 400.386179ms for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:54.160338  488429 pod_ready.go:40] duration metric: took 33.910841121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:54.235040  488429 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:34:54.238779  488429 out.go:179] * Done! kubectl is now configured to use "embed-certs-419185" cluster and "default" namespace by default
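
The pod_ready.go loop that just completed polls each control-plane pod's Ready condition, recording a per-pod duration metric. A rough standalone equivalent that shells out to kubectl (assumes kubectl and a configured context; the namespace and pod name are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady asks kubectl for the pod's Ready condition, roughly what the
// pod_ready.go:94 lines above check via the API.
func podReady(ns, pod string) bool {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for !podReady("kube-system", "coredns-66bc5c9577-q85rh") {
		time.Sleep(2 * time.Second)
	}
	fmt.Println("Ready")
}
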
	I1025 10:34:49.770251  492025 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.770375  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.838408  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1025 10:34:49.838513  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	W1025 10:34:50.045784  492025 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 10:34:50.045966  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:50.290603  492025 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 10:34:50.290694  492025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:50.290774  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:50.290853  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1025 10:34:50.293193  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:50.293265  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:50.336113  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:52.399760  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.106466746s)
	I1025 10:34:52.399789  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 10:34:52.399809  492025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:52.399859  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:52.399931  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.063730272s)
	I1025 10:34:52.399972  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:54.097345  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.697460606s)
	I1025 10:34:54.097373  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 10:34:54.097437  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:34:54.097408  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.697420728s)
	I1025 10:34:54.097593  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:54.097496  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:34:55.196477  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.098840852s)
	I1025 10:34:55.196500  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.098887531s)
	I1025 10:34:55.196543  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 10:34:55.196507  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 10:34:55.196613  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:55.196637  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:34:55.196657  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:56.631091  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.434405116s)
	I1025 10:34:56.631116  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 10:34:56.631141  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:56.631219  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:56.631296  492025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.434648394s)
	I1025 10:34:56.631313  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 10:34:56.631328  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 10:34:58.059268  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.428021212s)
	I1025 10:34:58.059294  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 10:34:58.059330  492025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:34:58.059386  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:35:02.158764  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.099355839s)
	I1025 10:35:02.158789  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 10:35:02.158809  492025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:35:02.158860  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:35:02.724738  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 10:35:02.724781  492025 cache_images.go:124] Successfully loaded all cached images
	I1025 10:35:02.724789  492025 cache_images.go:93] duration metric: took 13.98957548s to LoadCachedImages
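
LoadCachedImages as logged above is a per-image pipeline: stat the tarball on the node, scp it over if the existence check fails, then sudo podman load -i. A local sketch of the final load loop; the real flow runs each step over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadImages loads each cached tarball into the runtime via podman.
func loadImages(tars []string) error {
	for _, tar := range tars {
		if _, err := os.Stat(tar); err != nil {
			// In the logged flow a failed existence check triggers an scp
			// from the host cache before loading; abbreviated here.
			return fmt.Errorf("tarball not present: %s", tar)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("podman load %s: %w", tar, err)
		}
	}
	return nil
}

func main() {
	err := loadImages([]string{"/var/lib/minikube/images/pause_3.10.1"})
	if err != nil {
		fmt.Println(err)
	}
}
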
	I1025 10:35:02.724800  492025 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:35:02.724910  492025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-768303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:35:02.725000  492025 ssh_runner.go:195] Run: crio config
	I1025 10:35:02.802164  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:35:02.802232  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:02.802273  492025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:35:02.802329  492025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-768303 NodeName:no-preload-768303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:35:02.802493  492025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-768303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:35:02.802581  492025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:02.810709  492025 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 10:35:02.810795  492025 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:02.819288  492025 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1025 10:35:02.819391  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 10:35:02.819919  492025 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1025 10:35:02.820371  492025 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1025 10:35:02.823960  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 10:35:02.823995  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1025 10:35:03.673439  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:03.687858  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 10:35:03.690980  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 10:35:03.692858  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 10:35:03.692894  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1025 10:35:03.701705  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 10:35:03.701749  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1025 10:35:04.348669  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:35:04.357929  492025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:35:04.372600  492025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:35:04.387630  492025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:35:04.402928  492025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:35:04.407025  492025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:35:04.421952  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:04.538870  492025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:04.556209  492025 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303 for IP: 192.168.85.2
	I1025 10:35:04.556249  492025 certs.go:195] generating shared ca certs ...
	I1025 10:35:04.556283  492025 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.556479  492025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:35:04.556561  492025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:35:04.556577  492025 certs.go:257] generating profile certs ...
	I1025 10:35:04.556661  492025 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key
	I1025 10:35:04.556680  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt with IP's: []
	I1025 10:35:04.784657  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt ...
	I1025 10:35:04.784691  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: {Name:mk96599ced2d7d0768690d083aec6c1c898aecac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.784939  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key ...
	I1025 10:35:04.784955  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key: {Name:mk7c1f07aa13e94287c844d186ff4388b534d07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.785099  492025 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1
	I1025 10:35:04.785120  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:35:05.125577  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 ...
	I1025 10:35:05.125608  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1: {Name:mk4fdc8ab16e6fe9bbd567d636f39d4c4250ab0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.125843  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1 ...
	I1025 10:35:05.125862  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1: {Name:mk202f9c7fb018cba2d28cc27f3642722fb973c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.125962  492025 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt
	I1025 10:35:05.126042  492025 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key
	I1025 10:35:05.126108  492025 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key
	I1025 10:35:05.126128  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt with IP's: []
	I1025 10:35:05.695343  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt ...
	I1025 10:35:05.695374  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt: {Name:mkbb9261523043a2f102738b401c36b8f899086d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.695571  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key ...
	I1025 10:35:05.695586  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key: {Name:mkde20b0e60126f503d64d630c9a321a819b46e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.695816  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:35:05.695863  492025 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:35:05.695878  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:35:05.695904  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:35:05.695933  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:35:05.695958  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:35:05.696004  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
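	Each "generating signed profile cert" step above boils down to creating a fresh key pair and signing an x509 certificate with the shared CA. A self-contained Go sketch of that primitive, with an illustrative subject and IP rather than minikube's exact values:

```go
// Sketch of the primitive behind the profile-cert generation above:
// sign an x509 client certificate with an existing CA. The subject,
// organization, and IP are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func signClientCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Self-signed CA standing in for minikubeCA.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	if _, _, err := signClientCert(caCert, caKey); err != nil {
		panic(err)
	}
	fmt.Println("client cert signed")
}
```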
	I1025 10:35:05.696572  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:35:05.717349  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:35:05.737952  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:35:05.758044  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:35:05.787772  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:35:05.815980  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:35:05.840583  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:35:05.881205  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:35:05.902969  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:35:05.922291  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:35:05.940648  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:35:05.964014  492025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:35:05.980156  492025 ssh_runner.go:195] Run: openssl version
	I1025 10:35:05.988570  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:35:05.997865  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.002885  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.002958  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.063082  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:35:06.073542  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:35:06.083655  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.088288  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.088350  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.143275  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:35:06.163865  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:35:06.175043  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.179612  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.179681  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.227876  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
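	The `openssl x509 -hash` / `ln -fs` pairs above install each CA certificate under its subject-name hash (e.g. b5213941.0) so OpenSSL's lookup-by-hash can find it in /etc/ssl/certs. A small Go sketch of the same hash-and-symlink step, assuming the openssl binary is available:

```go
// Sketch of the hash-and-symlink dance above: ask openssl for the
// subject hash, then link <hash>.0 in the certs directory at the cert
// so OpenSSL's CA lookup can resolve it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Equivalent of ln -fs: replace any existing link.
	os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```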
	I1025 10:35:06.238334  492025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:35:06.244028  492025 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:35:06.244086  492025 kubeadm.go:400] StartCluster: {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:06.244168  492025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:35:06.244252  492025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:35:06.276732  492025 cri.go:89] found id: ""
	I1025 10:35:06.276807  492025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:35:06.287599  492025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:35:06.296684  492025 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:35:06.296751  492025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:35:06.307626  492025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:35:06.307646  492025 kubeadm.go:157] found existing configuration files:
	
	I1025 10:35:06.307697  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:35:06.317015  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:35:06.317075  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:35:06.325773  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:35:06.335219  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:35:06.335283  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:35:06.347694  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:35:06.357580  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:35:06.357635  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:35:06.366767  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:35:06.376865  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:35:06.376942  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:35:06.385705  492025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
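	kubeadm is invoked here with PATH prefixed by the version-pinned binaries directory, so the freshly copied v1.34.1 binaries take precedence over anything on the host. A minimal Go sketch of launching a command that way (arguments trimmed for brevity; not minikube's actual runner):

```go
// Sketch of running kubeadm from a version-pinned directory, as in the
// "env PATH=..." invocation above. The binary is resolved explicitly,
// and PATH is also prepended so child processes see the same binaries.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	binDir := "/var/lib/minikube/binaries/v1.34.1"
	cmd := exec.Command(filepath.Join(binDir, "kubeadm"), "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml")
	cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "kubeadm init failed:", err)
		os.Exit(1)
	}
}
```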
	I1025 10:35:06.441318  492025 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:35:06.442167  492025 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:35:06.469644  492025 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:35:06.469726  492025 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:35:06.469770  492025 kubeadm.go:318] OS: Linux
	I1025 10:35:06.469823  492025 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:35:06.469878  492025 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:35:06.469934  492025 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:35:06.469989  492025 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:35:06.470048  492025 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:35:06.470102  492025 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:35:06.470154  492025 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:35:06.470208  492025 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:35:06.470261  492025 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:35:06.561045  492025 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:35:06.561164  492025 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:35:06.561263  492025 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:35:06.601706  492025 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.671793402Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=22463ceb-4ef3-40b1-840b-22a53a3dac76 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.697336001Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=18a19bd2-b5cb-4de0-933d-5c49a22976a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.697479963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711588675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711789893Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f4ac08fb3ee3d3fc0925ae23ca1a2c519efc05e638f2cac6e46f9349b6ff43db/merged/etc/passwd: no such file or directory"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711833553Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f4ac08fb3ee3d3fc0925ae23ca1a2c519efc05e638f2cac6e46f9349b6ff43db/merged/etc/group: no such file or directory"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.712080728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.738821846Z" level=info msg="Created container 2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587: kube-system/storage-provisioner/storage-provisioner" id=18a19bd2-b5cb-4de0-933d-5c49a22976a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.741790428Z" level=info msg="Starting container: 2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587" id=70022a29-2ef0-439d-a02e-34fc37d24ea9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.745782461Z" level=info msg="Started container" PID=1639 containerID=2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587 description=kube-system/storage-provisioner/storage-provisioner id=70022a29-2ef0-439d-a02e-34fc37d24ea9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca9b730073e6cf307b031f2d2abcddf87092dc4e021d2b9263922beea38f8299
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.383216131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.39337263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.393581315Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.393667363Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402659577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402848741Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402926929Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.407305897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.40749296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.407597118Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.413883966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.414112376Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.414233518Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.420197226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.420578771Z" level=info msg="Updated default CNI network name to kindnet"
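	CRI-O's CNI monitor above reacts to CREATE/WRITE/RENAME inotify events on /etc/cni/net.d and re-resolves the default network each time a .conflist changes. A sketch of that watch-loop pattern using github.com/fsnotify/fsnotify, with a log line standing in for CRI-O's actual network refresh:

```go
// Sketch of the CNI config monitoring pattern in the CRI-O log above:
// watch /etc/cni/net.d with inotify and react to CREATE/WRITE/RENAME
// events on .conflist files.
package main

import (
	"log"
	"strings"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			if ev.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Rename) != 0 &&
				strings.HasSuffix(ev.Name, ".conflist") {
				// Stand-in for CRI-O re-reading the default CNI network.
				log.Printf("CNI monitoring event %s %q; reloading default network", ev.Op, ev.Name)
			}
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```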
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2b1f385d3a5d6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago      Running             storage-provisioner         2                   ca9b730073e6c       storage-provisioner                          kube-system
	585aabbe498b8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   b4dccf8ebd48b       dashboard-metrics-scraper-6ffb444bf9-95f8w   kubernetes-dashboard
	06b3300bd29e9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   35 seconds ago      Running             kubernetes-dashboard        0                   e799e6eb94430       kubernetes-dashboard-855c9754f9-8v7z6        kubernetes-dashboard
	83e6a282d7b7e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   34088c70975f5       busybox                                      default
	24ede3e861b57       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   8a1bea1efa291       coredns-66bc5c9577-q85rh                     kube-system
	fdd9e1e639fff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           50 seconds ago      Exited              storage-provisioner         1                   ca9b730073e6c       storage-provisioner                          kube-system
	b9b56386599ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   11d6c4771db65       kube-proxy-2vqfc                             kube-system
	0ae68418f11b6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   3196bdf8ce4e7       kindnet-4ncnd                                kube-system
	fa7fdbde79e11       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   70a51d816cbe6       kube-scheduler-embed-certs-419185            kube-system
	5d13bdf1233c7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   514a99a4fdc52       etcd-embed-certs-419185                      kube-system
	f217878a1e424       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   9d0fdd03e4f0d       kube-controller-manager-embed-certs-419185   kube-system
	e175f67ced2de       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   c6fb22a879842       kube-apiserver-embed-certs-419185            kube-system
	
	
	==> coredns [24ede3e861b571b41dacad659ea362061f94f90095e464ea06917f9e1f4b828b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58194 - 586 "HINFO IN 1047459028914518834.7731629018044210500. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026413338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-419185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-419185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-419185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-419185
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-419185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ffdb98b4-012c-493a-a464-c37adcde7bd4
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-q85rh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-419185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-4ncnd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-419185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-419185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-2vqfc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-419185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-95f8w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8v7z6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m15s              kube-proxy       
	  Normal   Starting                 49s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m22s              kubelet          Node embed-certs-419185 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m22s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s              kubelet          Node embed-certs-419185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s              kubelet          Node embed-certs-419185 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m22s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s              node-controller  Node embed-certs-419185 event: Registered Node embed-certs-419185 in Controller
	  Normal   NodeReady                95s                kubelet          Node embed-certs-419185 status is now: NodeReady
	  Normal   Starting                 58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-419185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-419185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-419185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node embed-certs-419185 event: Registered Node embed-certs-419185 in Controller
	
	
	==> dmesg <==
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1] <==
	{"level":"warn","ts":"2025-10-25T10:34:15.480921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.521048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.550550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.573582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.601599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.622785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.658547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.686018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.709612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.740673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.784583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.810089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.841602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.865694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.924761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.958281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.982363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.016517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.040176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.083285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.178413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.215064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.238171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.275951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.366411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:09 up  2:17,  0 user,  load average: 2.89, 3.44, 3.11
	Linux embed-certs-419185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0ae68418f11b666da7da5e8a9533b93c71476592288f12ee5e2240252976f3a9] <==
	I1025 10:34:19.180524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:34:19.181734       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:34:19.181952       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:34:19.182562       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:34:19.182628       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:34:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:34:19.376608       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:34:19.376683       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:34:19.376715       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:34:19.377525       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:34:49.376914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:34:49.377238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:34:49.377374       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:34:49.377576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:34:50.680842       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:34:50.680966       1 metrics.go:72] Registering metrics
	I1025 10:34:50.681048       1 controller.go:711] "Syncing nftables rules"
	I1025 10:34:59.382146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:34:59.382274       1 main.go:301] handling current node
	I1025 10:35:09.384457       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:35:09.384492       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33] <==
	I1025 10:34:17.812396       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:34:17.812473       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:34:17.813874       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:34:17.817460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:34:17.827782       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:34:17.827851       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:34:17.828033       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:34:17.828082       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:34:17.842355       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:34:17.842449       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:34:17.842479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:34:17.842508       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:34:17.843560       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:34:17.884337       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:34:18.415634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:34:18.454311       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:34:18.766264       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:34:19.052194       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:34:19.180373       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:34:19.237528       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:34:19.579202       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.144.104"}
	I1025 10:34:19.681668       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.101.66"}
	I1025 10:34:22.231250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:34:22.331327       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:34:22.386073       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f217878a1e424333492789b8a51f60ae7e258ef0746c75ef438b3edd64069f81] <==
	I1025 10:34:21.942201       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:34:21.944795       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:34:21.945875       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:34:21.948009       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:34:21.949167       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:34:21.950430       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:34:21.952592       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:34:21.952603       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:34:21.954697       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:34:21.955990       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:34:21.958269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:34:21.958279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:34:21.958671       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:34:21.959448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:34:21.960588       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:34:21.961758       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:34:21.962496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:34:21.964564       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:34:21.974224       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:34:21.974333       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:34:21.974424       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:34:21.974441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:34:21.974448       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:34:21.974231       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:34:21.975331       1 shared_informer.go:356] "Caches are synced" controller="expand"
	
	
	==> kube-proxy [b9b56386599ed53148fe4edb01fdb3a09ac28c031475b6f0f910103b06e5915e] <==
	I1025 10:34:19.718909       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:34:19.865628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:34:19.968392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:34:19.968438       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:34:19.968613       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:34:19.989559       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:34:19.989985       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:34:19.997785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:34:19.998122       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:34:19.998147       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:34:19.999673       1 config.go:200] "Starting service config controller"
	I1025 10:34:19.999696       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:34:19.999715       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:34:19.999719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:34:19.999731       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:34:19.999736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:34:20.000661       1 config.go:309] "Starting node config controller"
	I1025 10:34:20.000676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:34:20.000683       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:34:20.100251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:34:20.100265       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:34:20.100321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fa7fdbde79e116585ff7bd6892d6145e4f4dbd9d48734b75cf7c4527c5f3dd33] <==
	I1025 10:34:17.339840       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:34:19.737273       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:34:19.737321       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:34:19.752250       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:34:19.752369       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:34:19.752450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:34:19.752482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:34:19.752546       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.752608       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.753934       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:34:19.754102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:34:19.853921       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.854000       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:34:19.854087       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587762     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7cx\" (UniqueName: \"kubernetes.io/projected/0c078832-35bc-42be-83c1-88cc29206272-kube-api-access-6g7cx\") pod \"kubernetes-dashboard-855c9754f9-8v7z6\" (UID: \"0c078832-35bc-42be-83c1-88cc29206272\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587823     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0c078832-35bc-42be-83c1-88cc29206272-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8v7z6\" (UID: \"0c078832-35bc-42be-83c1-88cc29206272\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587851     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhxm\" (UniqueName: \"kubernetes.io/projected/6d7d645d-d5e4-47c4-8831-c9a897f1d28d-kube-api-access-jqhxm\") pod \"dashboard-metrics-scraper-6ffb444bf9-95f8w\" (UID: \"6d7d645d-d5e4-47c4-8831-c9a897f1d28d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587870     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d7d645d-d5e4-47c4-8831-c9a897f1d28d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-95f8w\" (UID: \"6d7d645d-d5e4-47c4-8831-c9a897f1d28d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: W1025 10:34:22.854319     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db WatchSource:0}: Error finding container b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db: Status 404 returned error can't find the container with id b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: W1025 10:34:22.867872     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567 WatchSource:0}: Error finding container e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567: Status 404 returned error can't find the container with id e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567
	Oct 25 10:34:27 embed-certs-419185 kubelet[771]: I1025 10:34:27.603883     771 scope.go:117] "RemoveContainer" containerID="15357ef390b42a71470038909b2154b97e46edaaf0eb03502ed5f267c47949ab"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: I1025 10:34:28.607293     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: E1025 10:34:28.607437     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: I1025 10:34:28.610634     771 scope.go:117] "RemoveContainer" containerID="15357ef390b42a71470038909b2154b97e46edaaf0eb03502ed5f267c47949ab"
	Oct 25 10:34:29 embed-certs-419185 kubelet[771]: I1025 10:34:29.611686     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:29 embed-certs-419185 kubelet[771]: E1025 10:34:29.611834     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:32 embed-certs-419185 kubelet[771]: I1025 10:34:32.812392     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:32 embed-certs-419185 kubelet[771]: E1025 10:34:32.812603     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.485803     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.659609     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.659901     771 scope.go:117] "RemoveContainer" containerID="585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: E1025 10:34:47.660048     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.695227     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6" podStartSLOduration=14.783778823 podStartE2EDuration="25.693796401s" podCreationTimestamp="2025-10-25 10:34:22 +0000 UTC" firstStartedPulling="2025-10-25 10:34:22.870735101 +0000 UTC m=+11.611498581" lastFinishedPulling="2025-10-25 10:34:33.780752687 +0000 UTC m=+22.521516159" observedRunningTime="2025-10-25 10:34:34.654311037 +0000 UTC m=+23.395074517" watchObservedRunningTime="2025-10-25 10:34:47.693796401 +0000 UTC m=+36.434559873"
	Oct 25 10:34:49 embed-certs-419185 kubelet[771]: I1025 10:34:49.668555     771 scope.go:117] "RemoveContainer" containerID="fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc"
	Oct 25 10:34:52 embed-certs-419185 kubelet[771]: I1025 10:34:52.812103     771 scope.go:117] "RemoveContainer" containerID="585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	Oct 25 10:34:52 embed-certs-419185 kubelet[771]: E1025 10:34:52.813047     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [06b3300bd29e97068f4dd4ed1769a529ce119164f2e4915858c3b1bcd3c78d18] <==
	2025/10/25 10:34:33 Using namespace: kubernetes-dashboard
	2025/10/25 10:34:33 Using in-cluster config to connect to apiserver
	2025/10/25 10:34:33 Using secret token for csrf signing
	2025/10/25 10:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:34:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:34:33 Generating JWE encryption key
	2025/10/25 10:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:34:34 Initializing JWE encryption key from synchronized object
	2025/10/25 10:34:34 Creating in-cluster Sidecar client
	2025/10/25 10:34:34 Serving insecurely on HTTP port: 9090
	2025/10/25 10:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:34:33 Starting overwatch
	
	
	==> storage-provisioner [2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587] <==
	I1025 10:34:49.764754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:34:49.791463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:34:49.791577       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:34:49.794433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:53.250799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:57.511888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:01.112903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:04.169767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.192345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.198081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:07.198227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:35:07.198390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705!
	I1025 10:35:07.198441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24dcc85a-2e1b-4115-b38c-8d923951b052", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705 became leader
	W1025 10:35:07.211730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.229142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:07.302050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705!
	W1025 10:35:09.241796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:09.268972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc] <==
	I1025 10:34:19.333931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:34:49.337715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-419185 -n embed-certs-419185: exit status 2 (486.571155ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-419185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-419185
helpers_test.go:243: (dbg) docker inspect embed-certs-419185:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	        "Created": "2025-10-25T10:32:21.18342263Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 488559,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:34:04.320676763Z",
	            "FinishedAt": "2025-10-25T10:34:03.248790349Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/hosts",
	        "LogPath": "/var/lib/docker/containers/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa-json.log",
	        "Name": "/embed-certs-419185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-419185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-419185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa",
	                "LowerDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/911a81a7ff1c2790f47bc97a79eabe5d1dbf9493b6cc35e93a7a32d7866a7cdf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-419185",
	                "Source": "/var/lib/docker/volumes/embed-certs-419185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-419185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-419185",
	                "name.minikube.sigs.k8s.io": "embed-certs-419185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9426b7a1179cac1aec836f59e9cc27f57719da01b70042b18b2642e2cd3edcda",
	            "SandboxKey": "/var/run/docker/netns/9426b7a1179c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-419185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:16:c2:c4:45:c7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2813ac098a0563027aa465aa29bfe18ee37b22086f641503f6265d21106417e7",
	                    "EndpointID": "1fce494b5e3a7fe3b240b0382b47a867470290fa0063b785ce7d1eeef2cb662a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-419185",
	                        "1fda185b5ef1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185: exit status 2 (447.055462ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-419185 logs -n 25: (1.57233533s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:30 UTC │ 25 Oct 25 10:31 UTC │
	│ image   │ old-k8s-version-610853 image list --format=json                                                                                                                                                                                               │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ pause   │ -p old-k8s-version-610853 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │                     │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:34:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:34:39.748518  492025 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:34:39.748728  492025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:39.748757  492025 out.go:374] Setting ErrFile to fd 2...
	I1025 10:34:39.748777  492025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:34:39.749140  492025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:34:39.749639  492025 out.go:368] Setting JSON to false
	I1025 10:34:39.750883  492025 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8230,"bootTime":1761380250,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:34:39.750994  492025 start.go:141] virtualization:  
	I1025 10:34:39.756991  492025 out.go:179] * [no-preload-768303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:34:39.761733  492025 notify.go:220] Checking for updates...
	I1025 10:34:39.762257  492025 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:34:39.765831  492025 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:34:39.769112  492025 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:34:39.772300  492025 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:34:39.775442  492025 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:34:39.778456  492025 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:34:39.782142  492025 config.go:182] Loaded profile config "embed-certs-419185": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:39.782273  492025 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:34:39.814850  492025 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:34:39.815043  492025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:39.880326  492025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:39.868431492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:39.880469  492025 docker.go:318] overlay module found
	I1025 10:34:39.883622  492025 out.go:179] * Using the docker driver based on user configuration
	I1025 10:34:39.886598  492025 start.go:305] selected driver: docker
	I1025 10:34:39.886634  492025 start.go:925] validating driver "docker" against <nil>
	I1025 10:34:39.886650  492025 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:34:39.887668  492025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:34:39.949856  492025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:34:39.939947941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:34:39.950048  492025 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:34:39.950300  492025 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:34:39.954196  492025 out.go:179] * Using Docker driver with root privileges
	I1025 10:34:39.957149  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:34:39.957221  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:34:39.957229  492025 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:34:39.957325  492025 start.go:349] cluster config:
	{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:34:39.962724  492025 out.go:179] * Starting "no-preload-768303" primary control-plane node in "no-preload-768303" cluster
	I1025 10:34:39.965636  492025 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:34:39.968478  492025 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:34:39.971461  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:39.971574  492025 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:34:39.971606  492025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:34:39.971641  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json: {Name:mka01f7e8a098c7a1beb738ad84816d292c28e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:34:39.971963  492025 cache.go:107] acquiring lock: {Name:mkcb674bf6bbc265e760bf8be116a57186608a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972020  492025 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:34:39.972027  492025 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 72.321µs
	I1025 10:34:39.972035  492025 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:34:39.972045  492025 cache.go:107] acquiring lock: {Name:mk9facf4e59193f96d96012cf82ef7fef364093d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972113  492025 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:39.972419  492025 cache.go:107] acquiring lock: {Name:mk1e264701efd819526cb1327aac37ba6383079c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972555  492025 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:39.972799  492025 cache.go:107] acquiring lock: {Name:mk145e03dafbcb30f74a27f99b5fba1addf06371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.972953  492025 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:39.973162  492025 cache.go:107] acquiring lock: {Name:mkb1799d37a5611969ac9809065db3c631238657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973298  492025 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:39.973466  492025 cache.go:107] acquiring lock: {Name:mkd43195497e2780982a3de630a4cda8f1c812f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973609  492025 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:34:39.973850  492025 cache.go:107] acquiring lock: {Name:mk92a2a5fb8dde9e51922a55162996cccaaf10a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.973961  492025 cache.go:107] acquiring lock: {Name:mk2866f59a9236262f732426434fc9bafb724b61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:39.974002  492025 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:39.974100  492025 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:39.975457  492025 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:34:39.975953  492025 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:39.977030  492025 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:39.977281  492025 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:39.978197  492025 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:39.978773  492025 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:39.979281  492025 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:40.060512  492025 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:34:40.060612  492025 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:34:40.060663  492025 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:34:40.060696  492025 start.go:360] acquireMachinesLock for no-preload-768303: {Name:mkf575e11dd83318b723f79e28f313be28102c7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:34:40.060916  492025 start.go:364] duration metric: took 164.441µs to acquireMachinesLock for "no-preload-768303"
	I1025 10:34:40.061019  492025 start.go:93] Provisioning new machine with config: &{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:34:40.061147  492025 start.go:125] createHost starting for "" (driver="docker")
	W1025 10:34:41.265633  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:43.762675  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:40.065097  492025 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:34:40.065430  492025 start.go:159] libmachine.API.Create for "no-preload-768303" (driver="docker")
	I1025 10:34:40.065486  492025 client.go:168] LocalClient.Create starting
	I1025 10:34:40.065570  492025 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:34:40.065622  492025 main.go:141] libmachine: Decoding PEM data...
	I1025 10:34:40.065642  492025 main.go:141] libmachine: Parsing certificate...
	I1025 10:34:40.065713  492025 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:34:40.065739  492025 main.go:141] libmachine: Decoding PEM data...
	I1025 10:34:40.065761  492025 main.go:141] libmachine: Parsing certificate...
	I1025 10:34:40.066185  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:34:40.087117  492025 cli_runner.go:211] docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:34:40.087259  492025 network_create.go:284] running [docker network inspect no-preload-768303] to gather additional debugging logs...
	I1025 10:34:40.087335  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303
	W1025 10:34:40.108897  492025 cli_runner.go:211] docker network inspect no-preload-768303 returned with exit code 1
	I1025 10:34:40.108938  492025 network_create.go:287] error running [docker network inspect no-preload-768303]: docker network inspect no-preload-768303: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-768303 not found
	I1025 10:34:40.108953  492025 network_create.go:289] output of [docker network inspect no-preload-768303]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-768303 not found
	
	** /stderr **
	I1025 10:34:40.109075  492025 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:40.136206  492025 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:34:40.136618  492025 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:34:40.137004  492025 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:34:40.137388  492025 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-2813ac098a05 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:11:56:2a:1e:79} reservation:<nil>}
	I1025 10:34:40.137905  492025 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001e46090}
	I1025 10:34:40.137961  492025 network_create.go:124] attempt to create docker network no-preload-768303 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1025 10:34:40.138063  492025 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-768303 no-preload-768303
	I1025 10:34:40.218391  492025 network_create.go:108] docker network no-preload-768303 192.168.85.0/24 created
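Note: the network step above is reproducible by hand; the flags below are copied from the cli_runner line in this log (subnet and profile name are specific to this run, any free private subnet works):

	# pick a free bridge subnet, then create the labeled minikube network
	docker network ls --filter driver=bridge
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true \
	  --label=name.minikube.sigs.k8s.io=no-preload-768303 no-preload-768303
	docker network inspect no-preload-768303 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'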
	I1025 10:34:40.218425  492025 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-768303" container
	I1025 10:34:40.218501  492025 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:34:40.236546  492025 cli_runner.go:164] Run: docker volume create no-preload-768303 --label name.minikube.sigs.k8s.io=no-preload-768303 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:34:40.272283  492025 oci.go:103] Successfully created a docker volume no-preload-768303
	I1025 10:34:40.272430  492025 cli_runner.go:164] Run: docker run --rm --name no-preload-768303-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-768303 --entrypoint /usr/bin/test -v no-preload-768303:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:34:40.294054  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:34:40.297218  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:34:40.299571  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:34:40.303790  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:34:40.310716  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:34:40.310944  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:34:40.320755  492025 cache.go:162] opening:  /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:34:40.351444  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:34:40.351470  492025 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 378.00723ms
	I1025 10:34:40.351481  492025 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:34:40.583018  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:34:40.583104  492025 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 609.946623ms
	I1025 10:34:40.583194  492025 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:34:40.896768  492025 oci.go:107] Successfully prepared a docker volume no-preload-768303
	I1025 10:34:40.896804  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1025 10:34:40.896950  492025 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:34:40.897056  492025 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:34:41.006669  492025 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-768303 --name no-preload-768303 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-768303 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-768303 --network no-preload-768303 --ip 192.168.85.2 --volume no-preload-768303:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
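The node is a single privileged container built from the kicbase image; SSH (22), the API server (8443) and the other service ports are published to random host ports bound to 127.0.0.1. A quick way to see what was mapped (container name from this run):

	docker port no-preload-768303 22/tcp      # host port used for SSH provisioning (33452 below)
	docker port no-preload-768303 8443/tcp    # host port fronting the Kubernetes API server
	docker container inspect no-preload-768303 --format '{{.State.Status}}'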
	I1025 10:34:41.165506  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:34:41.165762  492025 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.192962944s
	I1025 10:34:41.165859  492025 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:34:41.184285  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:34:41.192769  492025 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.218804258s
	I1025 10:34:41.192794  492025 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:34:41.292076  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:34:41.292104  492025 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.319691046s
	I1025 10:34:41.292123  492025 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:34:41.294156  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:34:41.294192  492025 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.322145544s
	I1025 10:34:41.294203  492025 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:34:41.439265  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Running}}
	I1025 10:34:41.474967  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:41.516378  492025 cli_runner.go:164] Run: docker exec no-preload-768303 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:34:41.597814  492025 oci.go:144] the created container "no-preload-768303" has a running status.
	I1025 10:34:41.597845  492025 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa...
	I1025 10:34:41.983963  492025 cache.go:157] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:34:41.983999  492025 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.01015107s
	I1025 10:34:41.984011  492025 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:34:41.984022  492025 cache.go:87] Successfully saved all images to host disk.
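At this point all seven images exist as tarballs in the host-side cache; the layout mirrors the registry path under a per-arch directory. A sketch using the cache root from this run:

	ls /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/
	# coredns/  etcd_3.6.4-0  kube-apiserver_v1.34.1  kube-controller-manager_v1.34.1
	# kube-proxy_v1.34.1  kube-scheduler_v1.34.1  pause_3.10.1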
	I1025 10:34:42.389856  492025 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:34:42.415283  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:42.434990  492025 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:34:42.435012  492025 kic_runner.go:114] Args: [docker exec --privileged no-preload-768303 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:34:42.479288  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:34:42.498195  492025 machine.go:93] provisionDockerMachine start ...
	I1025 10:34:42.498295  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:42.517039  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:42.517540  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:42.517557  492025 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:34:42.518499  492025 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58604->127.0.0.1:33452: read: connection reset by peer
	I1025 10:34:45.670998  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:34:45.671023  492025 ubuntu.go:182] provisioning hostname "no-preload-768303"
	I1025 10:34:45.671132  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:45.689701  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:45.690016  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:45.690033  492025 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-768303 && echo "no-preload-768303" | sudo tee /etc/hostname
	I1025 10:34:45.850805  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:34:45.850888  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:45.868758  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:45.869075  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:45.869099  492025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-768303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-768303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-768303' | sudo tee -a /etc/hosts; 
				fi
			fi
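Provisioning runs over SSH as the docker user, using the freshly generated key against the 127.0.0.1 port mapped above (33452 in this run). The same session can be opened manually (key path and port taken from this log):

	ssh -i /home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa \
	  -p 33452 docker@127.0.0.1 hostname
	# no-preload-768303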
	I1025 10:34:46.023593  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:34:46.023618  492025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:34:46.023636  492025 ubuntu.go:190] setting up certificates
	I1025 10:34:46.023645  492025 provision.go:84] configureAuth start
	I1025 10:34:46.023721  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:46.042373  492025 provision.go:143] copyHostCerts
	I1025 10:34:46.042449  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:34:46.042468  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:34:46.042552  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:34:46.042646  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:34:46.042658  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:34:46.042695  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:34:46.042761  492025 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:34:46.042773  492025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:34:46.042800  492025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:34:46.042853  492025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.no-preload-768303 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-768303]
	I1025 10:34:46.313446  492025 provision.go:177] copyRemoteCerts
	I1025 10:34:46.313518  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:34:46.313557  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.332035  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:46.443627  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:34:46.462007  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:34:46.480054  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:34:46.498378  492025 provision.go:87] duration metric: took 474.707967ms to configureAuth
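configureAuth copies the CA material and generates a server certificate whose SANs cover every name the node may be reached by (the san=[...] list logged above). The SAN list can be double-checked with openssl (a sketch; -ext needs OpenSSL 1.1.1+, expected output reconstructed from the san list, not captured from this run):

	openssl x509 -in /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem \
	  -noout -subject -ext subjectAltName
	# X509v3 Subject Alternative Name:
	#     DNS:localhost, DNS:minikube, DNS:no-preload-768303, IP Address:127.0.0.1, IP Address:192.168.85.2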
	I1025 10:34:46.498407  492025 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:34:46.498600  492025 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:34:46.498716  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.517287  492025 main.go:141] libmachine: Using SSH client type: native
	I1025 10:34:46.517634  492025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33452 <nil> <nil>}
	I1025 10:34:46.517658  492025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:34:46.868055  492025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:34:46.868076  492025 machine.go:96] duration metric: took 4.369857006s to provisionDockerMachine
	I1025 10:34:46.868085  492025 client.go:171] duration metric: took 6.802592461s to LocalClient.Create
	I1025 10:34:46.868096  492025 start.go:167] duration metric: took 6.802667753s to libmachine.API.Create "no-preload-768303"
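The only runtime option injected at this stage is an insecure-registry flag covering the service CIDR, written to a sysconfig drop-in and picked up by the crio restart in the same SSH command. The result on the node matches the tee output above:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '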
	I1025 10:34:46.868103  492025 start.go:293] postStartSetup for "no-preload-768303" (driver="docker")
	I1025 10:34:46.868114  492025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:34:46.868196  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:34:46.868241  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:46.886858  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:46.991335  492025 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:34:46.994663  492025 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:34:46.994689  492025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:34:46.994700  492025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:34:46.994763  492025 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:34:46.994849  492025 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:34:46.994954  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:34:47.004029  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:34:47.022582  492025 start.go:296] duration metric: took 154.465638ms for postStartSetup
	I1025 10:34:47.022951  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:47.043336  492025 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:34:47.043635  492025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:34:47.043683  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.062302  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.164476  492025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:34:47.169585  492025 start.go:128] duration metric: took 7.108421136s to createHost
	I1025 10:34:47.169609  492025 start.go:83] releasing machines lock for "no-preload-768303", held for 7.108677764s
	I1025 10:34:47.169684  492025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:34:47.187523  492025 ssh_runner.go:195] Run: cat /version.json
	I1025 10:34:47.187575  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.187606  492025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:34:47.187670  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:34:47.212421  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.229341  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:34:47.319678  492025 ssh_runner.go:195] Run: systemctl --version
	I1025 10:34:47.425704  492025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:34:47.463394  492025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:34:47.467900  492025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:34:47.467971  492025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:34:47.519836  492025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:34:47.519860  492025 start.go:495] detecting cgroup driver to use...
	I1025 10:34:47.519894  492025 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:34:47.519948  492025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:34:47.546644  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:34:47.563402  492025 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:34:47.563470  492025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:34:47.582328  492025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:34:47.602471  492025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:34:47.755248  492025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:34:47.884960  492025 docker.go:234] disabling docker service ...
	I1025 10:34:47.885081  492025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:34:47.908849  492025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:34:47.922417  492025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:34:48.049128  492025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:34:48.175579  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:34:48.188846  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:34:48.202716  492025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:34:48.202783  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.211649  492025 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:34:48.211765  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.220912  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.230323  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.239739  492025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:34:48.248328  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.259780  492025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.275347  492025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:34:48.285571  492025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:34:48.294033  492025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
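The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a new file. The net effect, reconstructed from the commands (a sketch, not a dump of the actual file):

	cat /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	# default_sysctls = [
	#   "net.ipv4.ip_unprivileged_port_start=0",
	# ]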
	I1025 10:34:48.301710  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:34:48.421542  492025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:34:48.570574  492025 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:34:48.570658  492025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:34:48.574785  492025 start.go:563] Will wait 60s for crictl version
	I1025 10:34:48.574853  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:48.578566  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:34:48.603946  492025 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
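With /etc/crictl.yaml pointing at the cri-o socket (written a few steps earlier), crictl needs no extra flags; the explicit endpoint form is equivalent:

	sudo crictl version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock images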
	I1025 10:34:48.604039  492025 ssh_runner.go:195] Run: crio --version
	I1025 10:34:48.638554  492025 ssh_runner.go:195] Run: crio --version
	I1025 10:34:48.677488  492025 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1025 10:34:46.263726  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	W1025 10:34:48.264262  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:48.680391  492025 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:34:48.696692  492025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:34:48.700522  492025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
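host.minikube.internal gives workloads a stable name for the docker network gateway (192.168.85.1 here); the grep/rewrite above is idempotent. Verifying on the node:

	grep host.minikube.internal /etc/hosts
	# 192.168.85.1	host.minikube.internal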
	I1025 10:34:48.710142  492025 kubeadm.go:883] updating cluster {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:34:48.710250  492025 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:34:48.710292  492025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:34:48.735133  492025 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1025 10:34:48.735197  492025 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1025 10:34:48.735244  492025 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:48.735446  492025 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:48.735534  492025 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:48.735617  492025 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:48.735700  492025 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:48.735783  492025 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1025 10:34:48.735876  492025 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:48.735982  492025 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:48.736997  492025 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:48.737107  492025 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:48.737169  492025 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1025 10:34:48.737227  492025 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:48.738249  492025 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:48.738451  492025 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:48.738594  492025 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:48.738873  492025 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
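These daemon lookup failures are expected, not errors: minikube first asks the local Docker daemon for each image and only falls back to the tarball cache when the daemon does not have it. The same check by hand:

	docker image inspect registry.k8s.io/pause:3.10.1 >/dev/null 2>&1 \
	  || echo "not in local daemon; minikube will use the cached tarball"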
	I1025 10:34:49.002679  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.003259  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.008637  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.009175  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1025 10:34:49.010871  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.094967  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.138230  492025 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1025 10:34:49.138274  492025 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.138332  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.138398  492025 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1025 10:34:49.138417  492025 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.138438  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.142514  492025 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1025 10:34:49.142559  492025 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.142606  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.142677  492025 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1025 10:34:49.142697  492025 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1025 10:34:49.142718  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.151971  492025 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1025 10:34:49.152011  492025 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.152061  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.157379  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.157445  492025 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1025 10:34:49.157481  492025 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.157512  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.157563  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.160287  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.160658  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.163733  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.220575  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.253947  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.254104  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.254200  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.267118  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.267383  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.267479  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.300919  492025 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1025 10:34:49.300965  492025 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.301017  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:49.376201  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.376268  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1025 10:34:49.376308  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1025 10:34:49.381321  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1025 10:34:49.381396  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1025 10:34:49.381451  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1025 10:34:49.381527  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.471702  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1025 10:34:49.471805  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:49.471882  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1025 10:34:49.471927  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1025 10:34:49.471979  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:49.503240  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1025 10:34:49.503412  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:49.503512  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1025 10:34:49.503590  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.503678  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1025 10:34:49.503758  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:34:49.503883  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.521586  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1025 10:34:49.521626  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1025 10:34:49.521686  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1025 10:34:49.521702  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1025 10:34:49.521754  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1025 10:34:49.521827  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:49.567414  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1025 10:34:49.567458  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1025 10:34:49.567542  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1025 10:34:49.567585  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1025 10:34:49.567601  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1025 10:34:49.567636  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1025 10:34:49.567653  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1025 10:34:49.567689  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1025 10:34:49.567703  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
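Each transfer above is guarded by the same existence check: stat the remote tarball for size and mtime, and copy only when stat exits non-zero. A minimal sketch of that pattern (NODE is a hypothetical placeholder for the SSH target configured earlier; minikube itself does this through its ssh_runner):

	IMG=etcd_3.6.4-0
	ssh "$NODE" "stat -c '%s %y' /var/lib/minikube/images/$IMG" \
	  || scp "$HOME/.minikube/cache/images/arm64/registry.k8s.io/$IMG" \
	         "$NODE:/var/lib/minikube/images/$IMG"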
	I1025 10:34:49.726576  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1025 10:34:49.726747  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	W1025 10:34:50.268182  488429 pod_ready.go:104] pod "coredns-66bc5c9577-q85rh" is not "Ready", error: <nil>
	I1025 10:34:52.763007  488429 pod_ready.go:94] pod "coredns-66bc5c9577-q85rh" is "Ready"
	I1025 10:34:52.763032  488429 pod_ready.go:86] duration metric: took 32.506926744s for pod "coredns-66bc5c9577-q85rh" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.767132  488429 pod_ready.go:83] waiting for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.773200  488429 pod_ready.go:94] pod "etcd-embed-certs-419185" is "Ready"
	I1025 10:34:52.773276  488429 pod_ready.go:86] duration metric: took 6.065799ms for pod "etcd-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.776901  488429 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.783181  488429 pod_ready.go:94] pod "kube-apiserver-embed-certs-419185" is "Ready"
	I1025 10:34:52.783255  488429 pod_ready.go:86] duration metric: took 6.282983ms for pod "kube-apiserver-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.786004  488429 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:52.964910  488429 pod_ready.go:94] pod "kube-controller-manager-embed-certs-419185" is "Ready"
	I1025 10:34:52.964988  488429 pod_ready.go:86] duration metric: took 178.91153ms for pod "kube-controller-manager-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.159662  488429 pod_ready.go:83] waiting for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.559503  488429 pod_ready.go:94] pod "kube-proxy-2vqfc" is "Ready"
	I1025 10:34:53.559526  488429 pod_ready.go:86] duration metric: took 399.78635ms for pod "kube-proxy-2vqfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:53.759916  488429 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:54.160296  488429 pod_ready.go:94] pod "kube-scheduler-embed-certs-419185" is "Ready"
	I1025 10:34:54.160324  488429 pod_ready.go:86] duration metric: took 400.386179ms for pod "kube-scheduler-embed-certs-419185" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:34:54.160338  488429 pod_ready.go:40] duration metric: took 33.910841121s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:34:54.235040  488429 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:34:54.238779  488429 out.go:179] * Done! kubectl is now configured to use "embed-certs-419185" cluster and "default" namespace by default
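The pod_ready polling interleaved through this log (process 488429, the embed-certs cluster) is the library-side equivalent of waiting on pod conditions; with kubectl the same readiness gate would look roughly like this (context name from this run):

	kubectl --context embed-certs-419185 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s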
	I1025 10:34:49.770251  492025 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.770375  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1025 10:34:49.838408  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1025 10:34:49.838513  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	W1025 10:34:50.045784  492025 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1025 10:34:50.045966  492025 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:50.290603  492025 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1025 10:34:50.290694  492025 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:50.290774  492025 ssh_runner.go:195] Run: which crictl
	I1025 10:34:50.290853  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1025 10:34:50.293193  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:50.293265  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1025 10:34:50.336113  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:52.399760  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (2.106466746s)
	I1025 10:34:52.399789  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1025 10:34:52.399809  492025 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:52.399859  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1025 10:34:52.399931  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.063730272s)
	I1025 10:34:52.399972  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:54.097345  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.697460606s)
	I1025 10:34:54.097373  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1025 10:34:54.097437  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:34:54.097408  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.697420728s)
	I1025 10:34:54.097593  492025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:34:54.097496  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1025 10:34:55.196477  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.098840852s)
	I1025 10:34:55.196500  492025 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.098887531s)
	I1025 10:34:55.196543  492025 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1025 10:34:55.196507  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1025 10:34:55.196613  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:55.196637  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:34:55.196657  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1025 10:34:56.631091  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.434405116s)
	I1025 10:34:56.631116  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1025 10:34:56.631141  492025 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:56.631219  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1025 10:34:56.631296  492025 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.434648394s)
	I1025 10:34:56.631313  492025 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1025 10:34:56.631328  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1025 10:34:58.059268  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.428021212s)
	I1025 10:34:58.059294  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1025 10:34:58.059330  492025 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:34:58.059386  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1025 10:35:02.158764  492025 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.099355839s)
	I1025 10:35:02.158789  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1025 10:35:02.158809  492025 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:35:02.158860  492025 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1025 10:35:02.724738  492025 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1025 10:35:02.724781  492025 cache_images.go:124] Successfully loaded all cached images
	I1025 10:35:02.724789  492025 cache_images.go:93] duration metric: took 13.98957548s to LoadCachedImages
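The cache_images lines above follow one pattern per image: an existence check over SSH (stat -c "%s %y"), an scp of the cached tarball only when that check fails, then sudo podman load -i to import it into CRI-O's store; arch-mismatched images (the storage-provisioner warning at 10:34:50) are first removed with crictl rmi and re-transferred. A minimal local sketch of that sync step, with a hypothetical runOverSSH helper standing in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// runOverSSH is a stand-in for minikube's ssh_runner: here it simply
// runs the command locally and reports its exit status.
func runOverSSH(name string, args ...string) error {
	return exec.Command(name, args...).Run()
}

// syncImage mirrors the per-image flow in the log: stat, transfer on
// miss, then podman load.
func syncImage(cachePath, remotePath string) error {
	// 1. Existence check; a non-zero exit means the tarball is missing.
	if err := runOverSSH("stat", "-c", "%s %y", remotePath); err != nil {
		// 2. Transfer the cached tarball (scp in the real flow; plain cp here).
		if err := runOverSSH("cp", cachePath, remotePath); err != nil {
			return fmt.Errorf("transfer %s: %w", cachePath, err)
		}
	}
	// 3. Load the tarball into the container runtime's image store.
	if err := runOverSSH("sudo", "podman", "load", "-i", remotePath); err != nil {
		return fmt.Errorf("podman load %s: %w", remotePath, err)
	}
	return nil
}

func main() {
	err := syncImage(
		"/home/user/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1",
	)
	fmt.Println("sync result:", err)
}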
	I1025 10:35:02.724800  492025 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:35:02.724910  492025 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-768303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
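The kubelet unit shown above is rendered from the node's settings and written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 367-byte scp later in the log). A sketch of that rendering with text/template; the nodeConfig struct and its field names are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// unitTmpl reproduces the drop-in from the log; only the version, node
// name, and node IP vary per profile.
const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	// Illustrative stand-in for the profile config.
	nodeConfig := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "no-preload-768303", "192.168.85.2"}

	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// minikube scps the rendered bytes to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	_ = t.Execute(os.Stdout, nodeConfig)
}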
	I1025 10:35:02.725000  492025 ssh_runner.go:195] Run: crio config
	I1025 10:35:02.802164  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:35:02.802232  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:02.802273  492025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:35:02.802329  492025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-768303 NodeName:no-preload-768303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:35:02.802493  492025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-768303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
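The config printed above (ending at tcpCloseWaitTimeout) is a single YAML stream: four documents, InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by --- lines. It is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2214-byte scp at 10:35:04) and only promoted to kubeadm.yaml right before kubeadm init. A sketch of the composition step, with placeholder document bodies:

package main

import (
	"os"
	"strings"
)

func main() {
	// Placeholder bodies; the real documents are the four shown in the log.
	docs := []string{
		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n# ...",
		"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n# ...",
		"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n# ...",
		"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n# ...",
	}
	// A YAML stream separates documents with a line containing only "---".
	combined := strings.Join(docs, "\n---\n") + "\n"
	// Staged first; promoted with `sudo cp kubeadm.yaml.new kubeadm.yaml` later.
	_ = os.WriteFile("kubeadm.yaml.new", []byte(combined), 0o644)
}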
	
	I1025 10:35:02.802581  492025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:02.810709  492025 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1025 10:35:02.810795  492025 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:02.819288  492025 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1025 10:35:02.819391  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1025 10:35:02.819919  492025 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1025 10:35:02.820371  492025 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1025 10:35:02.823960  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1025 10:35:02.823995  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1025 10:35:03.673439  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:03.687858  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1025 10:35:03.690980  492025 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1025 10:35:03.692858  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1025 10:35:03.692894  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1025 10:35:03.701705  492025 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1025 10:35:03.701749  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
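Each control-plane binary is downloaded with a checksum=file: companion URL, meaning the published .sha256 digest is fetched next to the binary and compared before the bits are scp'd into /var/lib/minikube/binaries. A standalone sketch of that verification; the URL comes straight from the log, the rest is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetch downloads a URL into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const url = "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"

	bin, err := fetch(url)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		panic(err)
	}

	sum := sha256.Sum256(bin)
	got := hex.EncodeToString(sum[:])
	// The .sha256 file carries the hex digest (possibly followed by a filename).
	want := strings.Fields(string(sumFile))[0]
	if got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s, want %s", got, want))
	}
	fmt.Println("kubelet digest verified:", got)
}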
	I1025 10:35:04.348669  492025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:35:04.357929  492025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:35:04.372600  492025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:35:04.387630  492025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:35:04.402928  492025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:35:04.407025  492025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:35:04.421952  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:04.538870  492025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:04.556209  492025 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303 for IP: 192.168.85.2
	I1025 10:35:04.556249  492025 certs.go:195] generating shared ca certs ...
	I1025 10:35:04.556283  492025 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.556479  492025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:35:04.556561  492025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:35:04.556577  492025 certs.go:257] generating profile certs ...
	I1025 10:35:04.556661  492025 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key
	I1025 10:35:04.556680  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt with IP's: []
	I1025 10:35:04.784657  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt ...
	I1025 10:35:04.784691  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: {Name:mk96599ced2d7d0768690d083aec6c1c898aecac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.784939  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key ...
	I1025 10:35:04.784955  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key: {Name:mk7c1f07aa13e94287c844d186ff4388b534d07f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:04.785099  492025 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1
	I1025 10:35:04.785120  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1025 10:35:05.125577  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 ...
	I1025 10:35:05.125608  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1: {Name:mk4fdc8ab16e6fe9bbd567d636f39d4c4250ab0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.125843  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1 ...
	I1025 10:35:05.125862  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1: {Name:mk202f9c7fb018cba2d28cc27f3642722fb973c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.125962  492025 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt.a4ce95f1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt
	I1025 10:35:05.126042  492025 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key
	I1025 10:35:05.126108  492025 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key
	I1025 10:35:05.126128  492025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt with IP's: []
	I1025 10:35:05.695343  492025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt ...
	I1025 10:35:05.695374  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt: {Name:mkbb9261523043a2f102738b401c36b8f899086d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:05.695571  492025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key ...
	I1025 10:35:05.695586  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key: {Name:mkde20b0e60126f503d64d630c9a321a819b46e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
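The certs.go/crypto.go lines above issue each profile cert by signing against the shared minikubeCA, with the apiserver cert carrying the IP SANs logged at 10:35:04 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained crypto/x509 sketch of that signing step; it generates a throwaway CA in-process, whereas minikube reuses its on-disk ca.key:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (minikube loads minikubeCA from disk instead).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs from the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}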
	I1025 10:35:05.695816  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:35:05.695863  492025 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:35:05.695878  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:35:05.695904  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:35:05.695933  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:35:05.695958  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:35:05.696004  492025 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:35:05.696572  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:35:05.717349  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:35:05.737952  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:35:05.758044  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:35:05.787772  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:35:05.815980  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:35:05.840583  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:35:05.881205  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:35:05.902969  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:35:05.922291  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:35:05.940648  492025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:35:05.964014  492025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:35:05.980156  492025 ssh_runner.go:195] Run: openssl version
	I1025 10:35:05.988570  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:35:05.997865  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.002885  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.002958  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:35:06.063082  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:35:06.073542  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:35:06.083655  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.088288  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.088350  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:35:06.143275  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:35:06.163865  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:35:06.175043  492025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.179612  492025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.179681  492025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:06.227876  492025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
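The openssl x509 -hash -noout calls compute the subject hash that names the trust-store symlink; that is why minikubeCA.pem is linked as /etc/ssl/certs/b5213941.0 above. A sketch of the same hash-then-link step (writing under /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout` prints the subject hash that OpenSSL
	// uses as the lookup filename in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"

	// Equivalent of `test -L ... || ln -fs ...`: create the symlink only
	// when it does not already exist.
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		err = os.Symlink(pemPath, link)
		fmt.Println("linked", link, "err:", err)
	}
}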
	I1025 10:35:06.238334  492025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:35:06.244028  492025 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:35:06.244086  492025 kubeadm.go:400] StartCluster: {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:06.244168  492025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:35:06.244252  492025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:35:06.276732  492025 cri.go:89] found id: ""
	I1025 10:35:06.276807  492025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:35:06.287599  492025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:35:06.296684  492025 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:35:06.296751  492025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:35:06.307626  492025 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:35:06.307646  492025 kubeadm.go:157] found existing configuration files:
	
	I1025 10:35:06.307697  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:35:06.317015  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:35:06.317075  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:35:06.325773  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:35:06.335219  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:35:06.335283  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:35:06.347694  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:35:06.357580  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:35:06.357635  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:35:06.366767  492025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:35:06.376865  492025 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:35:06.376942  492025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
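The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not pin https://control-plane.minikube.internal:8443 is deleted so the upcoming kubeadm init regenerates it; on this first start none of the files exist, so each grep exits 2 and each rm is a no-op. The same logic as a local sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A read error (file absent) or a missing endpoint both trigger removal.
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			fmt.Println("removing stale config:", f)
			_ = os.Remove(f) // `sudo rm -f` in the real flow
		}
	}
}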
	I1025 10:35:06.385705  492025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:35:06.441318  492025 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:35:06.442167  492025 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:35:06.469644  492025 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:35:06.469726  492025 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:35:06.469770  492025 kubeadm.go:318] OS: Linux
	I1025 10:35:06.469823  492025 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:35:06.469878  492025 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:35:06.469934  492025 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:35:06.469989  492025 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:35:06.470048  492025 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:35:06.470102  492025 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:35:06.470154  492025 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:35:06.470208  492025 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:35:06.470261  492025 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:35:06.561045  492025 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:35:06.561164  492025 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:35:06.561263  492025 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:35:06.601706  492025 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:35:06.608493  492025 out.go:252]   - Generating certificates and keys ...
	I1025 10:35:06.608646  492025 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:35:06.608729  492025 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:35:06.853120  492025 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:35:07.151532  492025 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:35:07.456933  492025 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:35:07.612494  492025 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:35:07.896687  492025 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:35:07.897206  492025 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-768303] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:35:08.641403  492025 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:35:08.646108  492025 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-768303] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1025 10:35:08.919353  492025 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:35:09.068560  492025 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	
	
	==> CRI-O <==
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.671793402Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=22463ceb-4ef3-40b1-840b-22a53a3dac76 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.697336001Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=18a19bd2-b5cb-4de0-933d-5c49a22976a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.697479963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711588675Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711789893Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/f4ac08fb3ee3d3fc0925ae23ca1a2c519efc05e638f2cac6e46f9349b6ff43db/merged/etc/passwd: no such file or directory"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.711833553Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/f4ac08fb3ee3d3fc0925ae23ca1a2c519efc05e638f2cac6e46f9349b6ff43db/merged/etc/group: no such file or directory"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.712080728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.738821846Z" level=info msg="Created container 2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587: kube-system/storage-provisioner/storage-provisioner" id=18a19bd2-b5cb-4de0-933d-5c49a22976a0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.741790428Z" level=info msg="Starting container: 2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587" id=70022a29-2ef0-439d-a02e-34fc37d24ea9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:34:49 embed-certs-419185 crio[646]: time="2025-10-25T10:34:49.745782461Z" level=info msg="Started container" PID=1639 containerID=2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587 description=kube-system/storage-provisioner/storage-provisioner id=70022a29-2ef0-439d-a02e-34fc37d24ea9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca9b730073e6cf307b031f2d2abcddf87092dc4e021d2b9263922beea38f8299
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.383216131Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.39337263Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.393581315Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.393667363Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402659577Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402848741Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.402926929Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.407305897Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.40749296Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.407597118Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.413883966Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.414112376Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.414233518Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.420197226Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:34:59 embed-certs-419185 crio[646]: time="2025-10-25T10:34:59.420578771Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2b1f385d3a5d6       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           22 seconds ago       Running             storage-provisioner         2                   ca9b730073e6c       storage-provisioner                          kube-system
	585aabbe498b8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   b4dccf8ebd48b       dashboard-metrics-scraper-6ffb444bf9-95f8w   kubernetes-dashboard
	06b3300bd29e9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago       Running             kubernetes-dashboard        0                   e799e6eb94430       kubernetes-dashboard-855c9754f9-8v7z6        kubernetes-dashboard
	83e6a282d7b7e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   34088c70975f5       busybox                                      default
	24ede3e861b57       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   8a1bea1efa291       coredns-66bc5c9577-q85rh                     kube-system
	fdd9e1e639fff       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   ca9b730073e6c       storage-provisioner                          kube-system
	b9b56386599ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   11d6c4771db65       kube-proxy-2vqfc                             kube-system
	0ae68418f11b6       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   3196bdf8ce4e7       kindnet-4ncnd                                kube-system
	fa7fdbde79e11       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   70a51d816cbe6       kube-scheduler-embed-certs-419185            kube-system
	5d13bdf1233c7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   514a99a4fdc52       etcd-embed-certs-419185                      kube-system
	f217878a1e424       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   9d0fdd03e4f0d       kube-controller-manager-embed-certs-419185   kube-system
	e175f67ced2de       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   c6fb22a879842       kube-apiserver-embed-certs-419185            kube-system
	
	
	==> coredns [24ede3e861b571b41dacad659ea362061f94f90095e464ea06917f9e1f4b828b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58194 - 586 "HINFO IN 1047459028914518834.7731629018044210500. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026413338s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-419185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-419185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=embed-certs-419185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_32_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-419185
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:34:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:32:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:34:48 +0000   Sat, 25 Oct 2025 10:33:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-419185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                ffdb98b4-012c-493a-a464-c37adcde7bd4
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-q85rh                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-embed-certs-419185                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-4ncnd                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-419185             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-419185    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-2vqfc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-419185             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-95f8w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8v7z6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m17s              kube-proxy       
	  Normal   Starting                 52s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m25s              kubelet          Node embed-certs-419185 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s              kubelet          Node embed-certs-419185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s              kubelet          Node embed-certs-419185 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s              node-controller  Node embed-certs-419185 event: Registered Node embed-certs-419185 in Controller
	  Normal   NodeReady                98s                kubelet          Node embed-certs-419185 status is now: NodeReady
	  Normal   Starting                 61s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-419185 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-419185 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-419185 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node embed-certs-419185 event: Registered Node embed-certs-419185 in Controller
	
	
	==> dmesg <==
	[Oct25 10:13] overlayfs: idmapped layers are currently not supported
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5d13bdf1233c74d26d6840451eeed0128e78110075227087c41d9d2ef0a3b0c1] <==
	{"level":"warn","ts":"2025-10-25T10:34:15.480921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.521048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.550550Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.573582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.601599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.622785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.658547Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.686018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.709612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.740673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.784583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.810089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.841602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.865694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.924761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.958281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:15.982363Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.016517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.040176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.083285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.178413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.215064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.238171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.275951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:34:16.366411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:35:12 up  2:17,  0 user,  load average: 3.14, 3.49, 3.13
	Linux embed-certs-419185 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0ae68418f11b666da7da5e8a9533b93c71476592288f12ee5e2240252976f3a9] <==
	I1025 10:34:19.180524       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:34:19.181734       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:34:19.181952       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:34:19.182562       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:34:19.182628       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:34:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:34:19.376608       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:34:19.376683       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:34:19.376715       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:34:19.377525       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:34:49.376914       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1025 10:34:49.377238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:34:49.377374       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:34:49.377576       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1025 10:34:50.680842       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:34:50.680966       1 metrics.go:72] Registering metrics
	I1025 10:34:50.681048       1 controller.go:711] "Syncing nftables rules"
	I1025 10:34:59.382146       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:34:59.382274       1 main.go:301] handling current node
	I1025 10:35:09.384457       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1025 10:35:09.384492       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e175f67ced2debe1beebce72628c6856d49efdd71fbee71a4a521e5cb4728c33] <==
	I1025 10:34:17.812396       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:34:17.812473       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:34:17.813874       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:34:17.817460       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:34:17.827782       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:34:17.827851       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:34:17.828033       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:34:17.828082       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:34:17.842355       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:34:17.842449       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:34:17.842479       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:34:17.842508       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:34:17.843560       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1025 10:34:17.884337       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:34:18.415634       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:34:18.454311       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:34:18.766264       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:34:19.052194       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:34:19.180373       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:34:19.237528       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:34:19.579202       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.144.104"}
	I1025 10:34:19.681668       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.101.66"}
	I1025 10:34:22.231250       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:34:22.331327       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:34:22.386073       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [f217878a1e424333492789b8a51f60ae7e258ef0746c75ef438b3edd64069f81] <==
	I1025 10:34:21.942201       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:34:21.944795       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1025 10:34:21.945875       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:34:21.948009       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:34:21.949167       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:34:21.950430       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:34:21.952592       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:34:21.952603       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:34:21.954697       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:34:21.955990       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:34:21.958269       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:34:21.958279       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:34:21.958671       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:34:21.959448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:34:21.960588       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:34:21.961758       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:34:21.962496       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:34:21.964564       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1025 10:34:21.974224       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:34:21.974333       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:34:21.974424       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:34:21.974441       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:34:21.974448       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:34:21.974231       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:34:21.975331       1 shared_informer.go:356] "Caches are synced" controller="expand"
	
	
	==> kube-proxy [b9b56386599ed53148fe4edb01fdb3a09ac28c031475b6f0f910103b06e5915e] <==
	I1025 10:34:19.718909       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:34:19.865628       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:34:19.968392       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:34:19.968438       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:34:19.968613       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:34:19.989559       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:34:19.989985       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:34:19.997785       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:34:19.998122       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:34:19.998147       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:34:19.999673       1 config.go:200] "Starting service config controller"
	I1025 10:34:19.999696       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:34:19.999715       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:34:19.999719       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:34:19.999731       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:34:19.999736       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:34:20.000661       1 config.go:309] "Starting node config controller"
	I1025 10:34:20.000676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:34:20.000683       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:34:20.100251       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:34:20.100265       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:34:20.100321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fa7fdbde79e116585ff7bd6892d6145e4f4dbd9d48734b75cf7c4527c5f3dd33] <==
	I1025 10:34:17.339840       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:34:19.737273       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:34:19.737321       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:34:19.752250       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:34:19.752369       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:34:19.752450       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:34:19.752482       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:34:19.752546       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.752608       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.753934       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:34:19.754102       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:34:19.853921       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:34:19.854000       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:34:19.854087       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587762     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g7cx\" (UniqueName: \"kubernetes.io/projected/0c078832-35bc-42be-83c1-88cc29206272-kube-api-access-6g7cx\") pod \"kubernetes-dashboard-855c9754f9-8v7z6\" (UID: \"0c078832-35bc-42be-83c1-88cc29206272\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587823     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0c078832-35bc-42be-83c1-88cc29206272-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-8v7z6\" (UID: \"0c078832-35bc-42be-83c1-88cc29206272\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587851     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqhxm\" (UniqueName: \"kubernetes.io/projected/6d7d645d-d5e4-47c4-8831-c9a897f1d28d-kube-api-access-jqhxm\") pod \"dashboard-metrics-scraper-6ffb444bf9-95f8w\" (UID: \"6d7d645d-d5e4-47c4-8831-c9a897f1d28d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: I1025 10:34:22.587870     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6d7d645d-d5e4-47c4-8831-c9a897f1d28d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-95f8w\" (UID: \"6d7d645d-d5e4-47c4-8831-c9a897f1d28d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w"
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: W1025 10:34:22.854319     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db WatchSource:0}: Error finding container b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db: Status 404 returned error can't find the container with id b4dccf8ebd48ba9982bc6370c345a8d023a1dca52219da7164aac416551026db
	Oct 25 10:34:22 embed-certs-419185 kubelet[771]: W1025 10:34:22.867872     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/1fda185b5ef1eb2faf4fa928e32967ade3a0a627d5653f4c7f4c57d474b9fefa/crio-e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567 WatchSource:0}: Error finding container e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567: Status 404 returned error can't find the container with id e799e6eb944307a472ff228ccc90ff35cfd2feffaed5622ae85927a6bf706567
	Oct 25 10:34:27 embed-certs-419185 kubelet[771]: I1025 10:34:27.603883     771 scope.go:117] "RemoveContainer" containerID="15357ef390b42a71470038909b2154b97e46edaaf0eb03502ed5f267c47949ab"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: I1025 10:34:28.607293     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: E1025 10:34:28.607437     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:28 embed-certs-419185 kubelet[771]: I1025 10:34:28.610634     771 scope.go:117] "RemoveContainer" containerID="15357ef390b42a71470038909b2154b97e46edaaf0eb03502ed5f267c47949ab"
	Oct 25 10:34:29 embed-certs-419185 kubelet[771]: I1025 10:34:29.611686     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:29 embed-certs-419185 kubelet[771]: E1025 10:34:29.611834     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:32 embed-certs-419185 kubelet[771]: I1025 10:34:32.812392     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:32 embed-certs-419185 kubelet[771]: E1025 10:34:32.812603     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.485803     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.659609     771 scope.go:117] "RemoveContainer" containerID="3b1ec72f591c13c96e485ef4a123a7fc92f346c9313acd3e4ef05b4a09018a1d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.659901     771 scope.go:117] "RemoveContainer" containerID="585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: E1025 10:34:47.660048     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:34:47 embed-certs-419185 kubelet[771]: I1025 10:34:47.695227     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8v7z6" podStartSLOduration=14.783778823 podStartE2EDuration="25.693796401s" podCreationTimestamp="2025-10-25 10:34:22 +0000 UTC" firstStartedPulling="2025-10-25 10:34:22.870735101 +0000 UTC m=+11.611498581" lastFinishedPulling="2025-10-25 10:34:33.780752687 +0000 UTC m=+22.521516159" observedRunningTime="2025-10-25 10:34:34.654311037 +0000 UTC m=+23.395074517" watchObservedRunningTime="2025-10-25 10:34:47.693796401 +0000 UTC m=+36.434559873"
	Oct 25 10:34:49 embed-certs-419185 kubelet[771]: I1025 10:34:49.668555     771 scope.go:117] "RemoveContainer" containerID="fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc"
	Oct 25 10:34:52 embed-certs-419185 kubelet[771]: I1025 10:34:52.812103     771 scope.go:117] "RemoveContainer" containerID="585aabbe498b8bf4b668c8c4429fb36aefd9683afaa99faabe55a1b9e126f4c9"
	Oct 25 10:34:52 embed-certs-419185 kubelet[771]: E1025 10:34:52.813047     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-95f8w_kubernetes-dashboard(6d7d645d-d5e4-47c4-8831-c9a897f1d28d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-95f8w" podUID="6d7d645d-d5e4-47c4-8831-c9a897f1d28d"
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:35:06 embed-certs-419185 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [06b3300bd29e97068f4dd4ed1769a529ce119164f2e4915858c3b1bcd3c78d18] <==
	2025/10/25 10:34:33 Using namespace: kubernetes-dashboard
	2025/10/25 10:34:33 Using in-cluster config to connect to apiserver
	2025/10/25 10:34:33 Using secret token for csrf signing
	2025/10/25 10:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:34:33 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:34:33 Generating JWE encryption key
	2025/10/25 10:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:34:34 Initializing JWE encryption key from synchronized object
	2025/10/25 10:34:34 Creating in-cluster Sidecar client
	2025/10/25 10:34:34 Serving insecurely on HTTP port: 9090
	2025/10/25 10:34:34 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:35:04 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:34:33 Starting overwatch
	
	
	==> storage-provisioner [2b1f385d3a5d618d9d725f0db4c64e38c43f1084e3fb506564a0e1fd718c2587] <==
	I1025 10:34:49.764754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:34:49.791463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:34:49.791577       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:34:49.794433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:53.250799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:34:57.511888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:01.112903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:04.169767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.192345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.198081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:07.198227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:35:07.198390       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705!
	I1025 10:35:07.198441       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"24dcc85a-2e1b-4115-b38c-8d923951b052", APIVersion:"v1", ResourceVersion:"650", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705 became leader
	W1025 10:35:07.211730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:07.229142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:07.302050       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-419185_8352b05a-21fd-46fc-95ec-7cf74f09f705!
	W1025 10:35:09.241796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:09.268972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:11.273943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:11.280024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fdd9e1e639fff0a8eae4c6115f0fe7c18321833449082075f7eab0ca237869cc] <==
	I1025 10:34:19.333931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:34:49.337715       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
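The storage-provisioner fatal above (i/o timeout to 10.96.0.1:443) lines up with the kindnet reflector timeouts earlier in the same log, suggesting the service VIP was briefly unreachable after the restart. A quick in-node probe, assuming the embed-certs-419185 profile is still running and curl is present in the node image (commands are illustrative, not part of the test run):

	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
	out/minikube-linux-arm64 -p embed-certs-419185 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head

If the first command returns version JSON now, the timeouts were transient startup ordering rather than a persistent dataplane fault.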
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-419185 -n embed-certs-419185
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-419185 -n embed-certs-419185: exit status 2 (506.716802ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
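minikube encodes component health in the exit-code bits of `status`, so exit status 2 here presumably flags a non-Running component even though the APIServer field prints Running; dumping every field would make the mismatch visible (illustrative, assuming the profile still exists):

	out/minikube-linux-arm64 status -p embed-certs-419185 --output json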
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-419185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.69s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (297.573086ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:35:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-768303 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-768303 describe deploy/metrics-server -n kube-system: exit status 1 (100.9232ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-768303 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
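The `check paused` failure in the stderr above comes from minikube shelling out to `sudo runc list -f json` inside the node (the command is quoted verbatim in the error). A minimal manual reproduction, assuming the no-preload-768303 profile is still up (illustrative only):

	out/minikube-linux-arm64 -p no-preload-768303 ssh -- sudo ls /run/runc
	out/minikube-linux-arm64 -p no-preload-768303 ssh -- sudo runc list -f json

With CRI-O as the runtime, /run/runc may simply not exist until runc has created state there, which would match the `open /run/runc: no such file or directory` error rather than indicating a genuinely paused cluster.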
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-768303
helpers_test.go:243: (dbg) docker inspect no-preload-768303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	        "Created": "2025-10-25T10:34:41.024753053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 492333,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:34:41.136791894Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hosts",
	        "LogPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1-json.log",
	        "Name": "/no-preload-768303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-768303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-768303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	                "LowerDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-768303",
	                "Source": "/var/lib/docker/volumes/no-preload-768303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-768303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-768303",
	                "name.minikube.sigs.k8s.io": "no-preload-768303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e9b0702518afb247ae83ebcdfbaf534894f5703d1e5825bd9b1c89302851a601",
	            "SandboxKey": "/var/run/docker/netns/e9b0702518af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-768303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:e6:8e:bf:e7:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "859ef893d5c6367e34b4500fcc3b03774bcaafce1067944be65176cec7fd385b",
	                    "EndpointID": "ff0adac0c032cb5099132bbb5d83f1fa1677af9ac7d2435bc4a02a11f4eb60df",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-768303",
	                        "9b0b6c2f298a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-768303 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-768303 logs -n 25: (2.01408229s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-610853                                                                                                                                                                                                                     │ old-k8s-version-610853       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:31 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:35:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:35:17.591860  496139 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:35:17.592429  496139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:17.592443  496139 out.go:374] Setting ErrFile to fd 2...
	I1025 10:35:17.592448  496139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:17.592831  496139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:35:17.593399  496139 out.go:368] Setting JSON to false
	I1025 10:35:17.594390  496139 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8268,"bootTime":1761380250,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:35:17.594521  496139 start.go:141] virtualization:  
	I1025 10:35:17.598586  496139 out.go:179] * [newest-cni-491554] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:35:17.601963  496139 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:35:17.602006  496139 notify.go:220] Checking for updates...
	I1025 10:35:17.608563  496139 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:35:17.611685  496139 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:35:17.614626  496139 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:35:17.618070  496139 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:35:17.620961  496139 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:35:17.624310  496139 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:17.624458  496139 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:35:17.671958  496139 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:35:17.672090  496139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:17.796172  496139 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-25 10:35:17.783297005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:17.796297  496139 docker.go:318] overlay module found
	I1025 10:35:17.799455  496139 out.go:179] * Using the docker driver based on user configuration
	I1025 10:35:17.802323  496139 start.go:305] selected driver: docker
	I1025 10:35:17.802344  496139 start.go:925] validating driver "docker" against <nil>
	I1025 10:35:17.802373  496139 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:35:17.803074  496139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:17.910349  496139 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-25 10:35:17.899231758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:17.910514  496139 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 10:35:17.910536  496139 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 10:35:17.910755  496139 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:35:17.913814  496139 out.go:179] * Using Docker driver with root privileges
	I1025 10:35:17.916795  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:17.916872  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:17.916885  496139 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:35:17.916960  496139 start.go:349] cluster config:
	{Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:17.920200  496139 out.go:179] * Starting "newest-cni-491554" primary control-plane node in "newest-cni-491554" cluster
	I1025 10:35:17.923095  496139 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:35:17.926037  496139 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:35:17.928921  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:17.928984  496139 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:35:17.928999  496139 cache.go:58] Caching tarball of preloaded images
	I1025 10:35:17.929084  496139 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:35:17.929100  496139 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:35:17.929224  496139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json ...
	I1025 10:35:17.929247  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json: {Name:mk30af115cc70131ab70ab52b597c60671b064da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:17.929418  496139 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:35:17.957822  496139 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:35:17.957851  496139 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:35:17.957865  496139 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:35:17.957889  496139 start.go:360] acquireMachinesLock for newest-cni-491554: {Name:mk0633ca83cb1f39b8a26429220857914907c494 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:35:17.957997  496139 start.go:364] duration metric: took 86.303µs to acquireMachinesLock for "newest-cni-491554"
	I1025 10:35:17.958030  496139 start.go:93] Provisioning new machine with config: &{Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
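	The machine config above is the same structure that was just saved to the profile's config.json. A quick way to inspect only the Kubernetes portion of that file (path taken from the log; jq is an assumption here, any JSON tool works):
	
		# Sketch: dump the KubernetesConfig block of the persisted profile
		jq .KubernetesConfig /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json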
	I1025 10:35:17.958112  496139 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:35:14.891635  492025 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:35:14.891759  492025 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:35:15.887723  492025 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001843151s
	I1025 10:35:15.892627  492025 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:35:15.892726  492025 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 10:35:15.893020  492025 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:35:15.893109  492025 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:35:19.360909  492025 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.467690796s
	I1025 10:35:17.961560  496139 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:35:17.961810  496139 start.go:159] libmachine.API.Create for "newest-cni-491554" (driver="docker")
	I1025 10:35:17.961859  496139 client.go:168] LocalClient.Create starting
	I1025 10:35:17.961946  496139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:35:17.961986  496139 main.go:141] libmachine: Decoding PEM data...
	I1025 10:35:17.962002  496139 main.go:141] libmachine: Parsing certificate...
	I1025 10:35:17.962061  496139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:35:17.962083  496139 main.go:141] libmachine: Decoding PEM data...
	I1025 10:35:17.962098  496139 main.go:141] libmachine: Parsing certificate...
	I1025 10:35:17.962468  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:35:17.991312  496139 cli_runner.go:211] docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:35:17.991400  496139 network_create.go:284] running [docker network inspect newest-cni-491554] to gather additional debugging logs...
	I1025 10:35:17.991422  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554
	W1025 10:35:18.016422  496139 cli_runner.go:211] docker network inspect newest-cni-491554 returned with exit code 1
	I1025 10:35:18.016460  496139 network_create.go:287] error running [docker network inspect newest-cni-491554]: docker network inspect newest-cni-491554: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-491554 not found
	I1025 10:35:18.016487  496139 network_create.go:289] output of [docker network inspect newest-cni-491554]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-491554 not found
	
	** /stderr **
	I1025 10:35:18.016589  496139 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:35:18.051574  496139 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:35:18.051881  496139 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:35:18.052220  496139 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:35:18.052634  496139 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977360}
	I1025 10:35:18.052655  496139 network_create.go:124] attempt to create docker network newest-cni-491554 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:35:18.052713  496139 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-491554 newest-cni-491554
	I1025 10:35:18.150078  496139 network_create.go:108] docker network newest-cni-491554 192.168.76.0/24 created
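	The three "skipping subnet ... that is taken" probes above show minikube walking the 192.168.x.0/24 private ranges until a free one is found. The same inventory of claimed subnets can be pulled straight from Docker (a sketch using the standard CLI):
	
		# List every bridge network and the subnet it occupies
		docker network ls -q --filter driver=bridge \
		  | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'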
	I1025 10:35:18.150108  496139 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-491554" container
	I1025 10:35:18.150181  496139 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:35:18.170461  496139 cli_runner.go:164] Run: docker volume create newest-cni-491554 --label name.minikube.sigs.k8s.io=newest-cni-491554 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:35:18.192165  496139 oci.go:103] Successfully created a docker volume newest-cni-491554
	I1025 10:35:18.192256  496139 cli_runner.go:164] Run: docker run --rm --name newest-cni-491554-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-491554 --entrypoint /usr/bin/test -v newest-cni-491554:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:35:18.922178  496139 oci.go:107] Successfully prepared a docker volume newest-cni-491554
	I1025 10:35:18.922230  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:18.922250  496139 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:35:18.922328  496139 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-491554:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:35:21.180361  492025 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.287686538s
	I1025 10:35:23.394664  492025 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501897785s
	I1025 10:35:23.457372  492025 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:35:23.511022  492025 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:35:23.595073  492025 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:35:23.595586  492025 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-768303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:35:23.630459  492025 kubeadm.go:318] [bootstrap-token] Using token: c9xqcz.fi2iogmqoucis458
	I1025 10:35:23.656117  492025 out.go:252]   - Configuring RBAC rules ...
	I1025 10:35:23.656276  492025 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:35:23.674548  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:35:23.690336  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:35:23.714970  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:35:23.720888  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:35:23.728883  492025 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:35:23.805150  492025 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:35:24.268308  492025 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:35:24.816708  492025 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:35:24.816729  492025 kubeadm.go:318] 
	I1025 10:35:24.816810  492025 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:35:24.816816  492025 kubeadm.go:318] 
	I1025 10:35:24.816897  492025 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:35:24.816902  492025 kubeadm.go:318] 
	I1025 10:35:24.816928  492025 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:35:24.816989  492025 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:35:24.817042  492025 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:35:24.817047  492025 kubeadm.go:318] 
	I1025 10:35:24.817102  492025 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:35:24.817112  492025 kubeadm.go:318] 
	I1025 10:35:24.817162  492025 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:35:24.817167  492025 kubeadm.go:318] 
	I1025 10:35:24.817221  492025 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:35:24.817298  492025 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:35:24.817369  492025 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:35:24.817373  492025 kubeadm.go:318] 
	I1025 10:35:24.817461  492025 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:35:24.817576  492025 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:35:24.817583  492025 kubeadm.go:318] 
	I1025 10:35:24.817675  492025 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token c9xqcz.fi2iogmqoucis458 \
	I1025 10:35:24.817784  492025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:35:24.817806  492025 kubeadm.go:318] 	--control-plane 
	I1025 10:35:24.817811  492025 kubeadm.go:318] 
	I1025 10:35:24.817899  492025 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:35:24.817903  492025 kubeadm.go:318] 
	I1025 10:35:24.817988  492025 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token c9xqcz.fi2iogmqoucis458 \
	I1025 10:35:24.818094  492025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:35:24.831135  492025 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:35:24.831385  492025 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:35:24.831494  492025 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
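	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key. Per the standard kubeadm procedure, it can be recomputed on the control-plane node to validate a join command before use:
	
		# Recompute the CA public-key hash shown in the kubeadm join output
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'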
	I1025 10:35:24.831511  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:35:24.831519  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:24.835277  492025 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:35:23.911777  496139 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-491554:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.989413369s)
	I1025 10:35:23.911808  496139 kic.go:203] duration metric: took 4.989554139s to extract preloaded images to volume ...
	W1025 10:35:23.911947  496139 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:35:23.912053  496139 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:35:24.016101  496139 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-491554 --name newest-cni-491554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-491554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-491554 --network newest-cni-491554 --ip 192.168.76.2 --volume newest-cni-491554:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
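	The docker run above publishes the node's service ports (22, 2376, 5000, 8443, 32443) on ephemeral localhost ports via --publish=127.0.0.1::PORT. Once the container is running, the mappings Docker actually picked can be listed with:
	
		# Show which localhost ports map to 22, 8443, etc.
		docker port newest-cni-491554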
	I1025 10:35:24.436537  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Running}}
	I1025 10:35:24.465635  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.490039  496139 cli_runner.go:164] Run: docker exec newest-cni-491554 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:35:24.546739  496139 oci.go:144] the created container "newest-cni-491554" has a running status.
	I1025 10:35:24.546774  496139 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa...
	I1025 10:35:24.756411  496139 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:35:24.785491  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.818343  496139 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:35:24.818402  496139 kic_runner.go:114] Args: [docker exec --privileged newest-cni-491554 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:35:24.889465  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.927666  496139 machine.go:93] provisionDockerMachine start ...
	I1025 10:35:24.927765  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:24.959350  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:24.959702  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:24.959715  496139 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:35:24.961983  496139 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41104->127.0.0.1:33457: read: connection reset by peer
	I1025 10:35:24.838955  492025 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:35:24.848564  492025 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:35:24.848584  492025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:35:24.907979  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
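	With the kindnet manifest applied above, one way to confirm the CNI pods actually come up is a rollout check against the DaemonSet (the name kindnet in kube-system is assumed from minikube's bundled manifest):
	
		kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s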
	I1025 10:35:25.593910  492025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:35:25.594047  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:25.594119  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-768303 minikube.k8s.io/updated_at=2025_10_25T10_35_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=no-preload-768303 minikube.k8s.io/primary=true
	I1025 10:35:25.877364  492025 ops.go:34] apiserver oom_adj: -16
	I1025 10:35:25.877471  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:26.377813  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:26.878571  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:27.378436  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:27.878570  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:28.377588  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:28.877566  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:29.377580  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:29.878459  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:30.378179  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:30.552245  492025 kubeadm.go:1113] duration metric: took 4.958242676s to wait for elevateKubeSystemPrivileges
	I1025 10:35:30.552279  492025 kubeadm.go:402] duration metric: took 24.308197084s to StartCluster
	I1025 10:35:30.552299  492025 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:30.552369  492025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:35:30.553012  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:30.553228  492025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:35:30.553317  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:35:30.553559  492025 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:30.553591  492025 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:35:30.553652  492025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-768303"
	I1025 10:35:30.553665  492025 addons.go:238] Setting addon storage-provisioner=true in "no-preload-768303"
	I1025 10:35:30.553687  492025 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:35:30.554173  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.554833  492025 addons.go:69] Setting default-storageclass=true in profile "no-preload-768303"
	I1025 10:35:30.554854  492025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-768303"
	I1025 10:35:30.555131  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.557512  492025 out.go:179] * Verifying Kubernetes components...
	I1025 10:35:30.562952  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:30.595845  492025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:35:28.119005  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-491554
	
	I1025 10:35:28.119033  496139 ubuntu.go:182] provisioning hostname "newest-cni-491554"
	I1025 10:35:28.119097  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:28.136646  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:28.136961  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:28.137067  496139 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-491554 && echo "newest-cni-491554" | sudo tee /etc/hostname
	I1025 10:35:28.299633  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-491554
	
	I1025 10:35:28.299730  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:28.318561  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:28.318851  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:28.318867  496139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-491554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-491554/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-491554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:35:28.475704  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:35:28.475733  496139 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:35:28.475762  496139 ubuntu.go:190] setting up certificates
	I1025 10:35:28.475786  496139 provision.go:84] configureAuth start
	I1025 10:35:28.475855  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:28.500918  496139 provision.go:143] copyHostCerts
	I1025 10:35:28.500989  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:35:28.501002  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:35:28.501093  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:35:28.501203  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:35:28.501216  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:35:28.501246  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:35:28.501320  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:35:28.501330  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:35:28.501356  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:35:28.501424  496139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.newest-cni-491554 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-491554]
	I1025 10:35:29.112188  496139 provision.go:177] copyRemoteCerts
	I1025 10:35:29.112262  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:35:29.112306  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.130713  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.243583  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:35:29.264745  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:35:29.285192  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:35:29.305548  496139 provision.go:87] duration metric: took 829.739078ms to configureAuth
	I1025 10:35:29.305621  496139 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:35:29.305853  496139 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:29.305973  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.323232  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:29.323713  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:29.323754  496139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:35:29.621453  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:35:29.621479  496139 machine.go:96] duration metric: took 4.693793086s to provisionDockerMachine
	I1025 10:35:29.621490  496139 client.go:171] duration metric: took 11.65962061s to LocalClient.Create
	I1025 10:35:29.621508  496139 start.go:167] duration metric: took 11.659700045s to libmachine.API.Create "newest-cni-491554"
	I1025 10:35:29.621516  496139 start.go:293] postStartSetup for "newest-cni-491554" (driver="docker")
	I1025 10:35:29.621535  496139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:35:29.621609  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:35:29.621661  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.645657  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.756043  496139 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:35:29.759261  496139 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:35:29.759290  496139 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:35:29.759307  496139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:35:29.759361  496139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:35:29.759444  496139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:35:29.759553  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:35:29.767655  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:35:29.788585  496139 start.go:296] duration metric: took 167.052865ms for postStartSetup
	I1025 10:35:29.788971  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:29.815581  496139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json ...
	I1025 10:35:29.815868  496139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:35:29.815922  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.836737  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.947245  496139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:35:29.953882  496139 start.go:128] duration metric: took 11.995754247s to createHost
	I1025 10:35:29.953911  496139 start.go:83] releasing machines lock for "newest-cni-491554", held for 11.995898199s
	I1025 10:35:29.953988  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:29.978333  496139 ssh_runner.go:195] Run: cat /version.json
	I1025 10:35:29.978387  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.978408  496139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:35:29.978468  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:30.052866  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:30.068194  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:30.196378  496139 ssh_runner.go:195] Run: systemctl --version
	I1025 10:35:30.326923  496139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:35:30.389315  496139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:35:30.398713  496139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:35:30.398783  496139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:35:30.437805  496139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:35:30.437891  496139 start.go:495] detecting cgroup driver to use...
	I1025 10:35:30.437961  496139 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:35:30.438048  496139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:35:30.468740  496139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:35:30.485449  496139 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:35:30.485561  496139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:35:30.509982  496139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:35:30.531298  496139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:35:30.835245  496139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:35:31.062579  496139 docker.go:234] disabling docker service ...
	I1025 10:35:31.062646  496139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:35:31.102501  496139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:35:31.126074  496139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:35:31.345827  496139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:35:31.588885  496139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:35:31.613386  496139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:35:31.648271  496139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:35:31.648362  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.661363  496139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:35:31.661432  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.681547  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.691381  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.705660  496139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:35:31.714668  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.725971  496139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.748519  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
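
The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, set conmon_cgroup, and open unprivileged ports via default_sysctls. A hedged Go equivalent of the first two substitutions, using regexp in place of sed (a sketch, not the code minikube runs):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Pin the pause image, as the first sed above does.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Force the cgroupfs cgroup manager, as the second sed does.
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
}
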
	I1025 10:35:31.760644  496139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:35:31.771397  496139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:35:31.781756  496139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:31.985290  496139 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:35:32.176876  496139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:35:32.176950  496139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:35:32.185497  496139 start.go:563] Will wait 60s for crictl version
	I1025 10:35:32.185575  496139 ssh_runner.go:195] Run: which crictl
	I1025 10:35:32.189580  496139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:35:32.234289  496139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:35:32.234384  496139 ssh_runner.go:195] Run: crio --version
	I1025 10:35:32.288207  496139 ssh_runner.go:195] Run: crio --version
	I1025 10:35:32.343418  496139 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:35:32.345571  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:35:32.374503  496139 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:35:32.378685  496139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
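
The bash one-liner above rewrites /etc/hosts: it filters out any stale host.minikube.internal entry, appends the current gateway mapping, and copies the temp file back into place. The same idea in Go (a sketch; the address is the one shown in the log for this node):

package main

import (
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any old host.minikube.internal line, then append the current one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.76.1\thost.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
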
	I1025 10:35:32.398584  496139 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 10:35:32.401305  496139 kubeadm.go:883] updating cluster {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:35:32.401432  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:32.401518  496139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:35:32.454728  496139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:35:32.454752  496139 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:35:32.454810  496139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:35:32.513414  496139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:35:32.513438  496139 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:35:32.513446  496139 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:35:32.513536  496139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-491554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
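
The [Unit]/[Service] block above is the systemd drop-in that later gets scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the distro default before the minikube-specific command line is set. A sketch that renders the same drop-in with text/template (flag values copied from the log; not the real generator):

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.Kubelet}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Kubelet": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"Node":    "newest-cni-491554",
		"IP":      "192.168.76.2",
	})
}
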
	I1025 10:35:32.513628  496139 ssh_runner.go:195] Run: crio config
	I1025 10:35:30.598253  492025 addons.go:238] Setting addon default-storageclass=true in "no-preload-768303"
	I1025 10:35:30.598299  492025 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:35:30.598739  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.598918  492025 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:35:30.598935  492025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:35:30.598987  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:35:30.659052  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:35:30.661715  492025 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:35:30.661739  492025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:35:30.661808  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:35:30.695416  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:35:31.088653  492025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:35:31.144865  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:35:31.144985  492025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:31.175208  492025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:35:32.291296  492025 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.146395019s)
	I1025 10:35:32.291321  492025 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 10:35:32.292250  492025 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.147249261s)
	I1025 10:35:32.292866  492025 node_ready.go:35] waiting up to 6m0s for node "no-preload-768303" to be "Ready" ...
	I1025 10:35:32.799270  492025 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-768303" context rescaled to 1 replicas
	I1025 10:35:32.841641  492025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666346057s)
	I1025 10:35:32.844704  492025 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 10:35:32.847637  492025 addons.go:514] duration metric: took 2.294021528s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1025 10:35:34.297688  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:32.597881  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:32.597900  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:32.597922  496139 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:35:32.597946  496139 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-491554 NodeName:newest-cni-491554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:35:32.598066  496139 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-491554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
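
The block above is a four-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration and KubeProxyConfiguration. One way to sanity-check such a stream before handing it to kubeadm is to decode it document by document; a sketch using gopkg.in/yaml.v3 (an assumption on my part: minikube templates this file rather than parsing it back):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a local copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind) // expect four documents
	}
}
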
	
	I1025 10:35:32.598133  496139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:32.607593  496139 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:35:32.607663  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:35:32.616697  496139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:35:32.637488  496139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:35:32.659407  496139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:35:32.682289  496139 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:35:32.687442  496139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:35:32.699233  496139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:32.897164  496139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:32.938964  496139 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554 for IP: 192.168.76.2
	I1025 10:35:32.938984  496139 certs.go:195] generating shared ca certs ...
	I1025 10:35:32.939001  496139 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:32.939218  496139 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:35:32.939285  496139 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:35:32.939299  496139 certs.go:257] generating profile certs ...
	I1025 10:35:32.939420  496139 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key
	I1025 10:35:32.939446  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt with IP's: []
	I1025 10:35:34.216920  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt ...
	I1025 10:35:34.216950  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt: {Name:mk512ce90ddbdbbfd5ecabfbda6bc1400fb538c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.217112  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key ...
	I1025 10:35:34.217129  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key: {Name:mk37698c313a90d602b9cd8e52667fe080d096e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.217225  496139 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda
	I1025 10:35:34.217243  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:35:34.922846  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda ...
	I1025 10:35:34.922878  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda: {Name:mk0bc9ab90fa8bde62384ac873795799edbe0266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.923114  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda ...
	I1025 10:35:34.923132  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda: {Name:mka83abd3b7d52bb94c96307e96f984b99cd06e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.923258  496139 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt
	I1025 10:35:34.923344  496139 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key
	I1025 10:35:34.923409  496139 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key
	I1025 10:35:34.923430  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt with IP's: []
	I1025 10:35:35.371774  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt ...
	I1025 10:35:35.371806  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt: {Name:mk7daa5b71a10a3820810a893d97f214371b9594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:35.371974  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key ...
	I1025 10:35:35.372000  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key: {Name:mk2248c415d6104d54a2a78442edd92357c31ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:35.372186  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:35:35.372233  496139 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:35:35.372247  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:35:35.372273  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:35:35.372299  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:35:35.372326  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:35:35.372382  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:35:35.373007  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:35:35.395988  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:35:35.416733  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:35:35.437607  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:35:35.460605  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:35:35.480944  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:35:35.501021  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:35:35.520082  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:35:35.539594  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:35:35.559047  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:35:35.578295  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:35:35.598792  496139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:35:35.616514  496139 ssh_runner.go:195] Run: openssl version
	I1025 10:35:35.623770  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:35:35.636912  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.641643  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.641722  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.688998  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:35:35.699506  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:35:35.713048  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.717115  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.717202  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.760659  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:35:35.769336  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:35:35.777898  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.782317  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.782378  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.830779  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
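
The openssl x509 -hash calls above compute OpenSSL's subject-hash for each CA PEM, and the ln -fs runs create the <hash>.0 symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL's certificate-directory lookup expects. A sketch of that hash-and-link convention, assuming openssl is on PATH and the process can write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl x509 -hash -noout prints the subject-hash, e.g. b5213941 as in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// OpenSSL resolves CAs by <subject-hash>.0 links in its cert directory.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
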
	I1025 10:35:35.839573  496139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:35:35.845539  496139 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:35:35.845586  496139 kubeadm.go:400] StartCluster: {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:35.845653  496139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:35:35.845726  496139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:35:35.907251  496139 cri.go:89] found id: ""
	I1025 10:35:35.907409  496139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:35:35.914963  496139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:35:35.923258  496139 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:35:35.923374  496139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:35:35.934564  496139 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:35:35.934634  496139 kubeadm.go:157] found existing configuration files:
	
	I1025 10:35:35.934723  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:35:35.942134  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:35:35.942248  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:35:35.949675  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:35:35.958432  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:35:35.958570  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:35:35.967060  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:35:35.975002  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:35:35.975114  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:35:35.983616  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:35:35.991367  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:35:35.991438  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
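
The grep/rm pairs above implement the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is removed otherwise so kubeadm can regenerate it. A compact sketch of that check (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Stale or missing: remove so kubeadm init writes a fresh one.
			os.Remove(conf)
			fmt.Println("removed", conf)
		}
	}
}
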
	I1025 10:35:35.999385  496139 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:35:36.046548  496139 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:35:36.046735  496139 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:35:36.072203  496139 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:35:36.072324  496139 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:35:36.072411  496139 kubeadm.go:318] OS: Linux
	I1025 10:35:36.072490  496139 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:35:36.072573  496139 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:35:36.072653  496139 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:35:36.072738  496139 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:35:36.072820  496139 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:35:36.072904  496139 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:35:36.072984  496139 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:35:36.073070  496139 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:35:36.073153  496139 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:35:36.150331  496139 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:35:36.150455  496139 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:35:36.150555  496139 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:35:36.158861  496139 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:35:36.164348  496139 out.go:252]   - Generating certificates and keys ...
	I1025 10:35:36.164447  496139 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:35:36.164520  496139 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:35:36.224149  496139 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:35:36.448598  496139 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:35:36.905226  496139 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	W1025 10:35:36.312175  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	W1025 10:35:38.797795  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:38.088533  496139 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:35:38.503715  496139 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:35:38.504221  496139 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-491554] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:35:38.758714  496139 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:35:38.759019  496139 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-491554] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:35:39.322166  496139 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:35:39.888227  496139 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:35:40.514426  496139 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:35:40.514727  496139 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:35:41.199650  496139 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:35:42.128848  496139 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:35:43.243309  496139 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:35:43.949534  496139 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:35:44.259473  496139 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:35:44.260055  496139 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:35:44.262602  496139 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 10:35:41.297101  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	W1025 10:35:43.297143  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:44.265902  496139 out.go:252]   - Booting up control plane ...
	I1025 10:35:44.266010  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:35:44.266092  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:35:44.266161  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:35:44.288811  496139 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:35:44.289369  496139 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:35:44.299435  496139 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:35:44.300212  496139 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:35:44.300390  496139 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:35:44.451625  496139 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:35:44.451775  496139 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:35:45.953470  496139 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501762581s
	I1025 10:35:45.957391  496139 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:35:45.957514  496139 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:35:45.957875  496139 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:35:45.957968  496139 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1025 10:35:45.301678  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:46.796154  492025 node_ready.go:49] node "no-preload-768303" is "Ready"
	I1025 10:35:46.796188  492025 node_ready.go:38] duration metric: took 14.503306355s for node "no-preload-768303" to be "Ready" ...
	I1025 10:35:46.796203  492025 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:35:46.796266  492025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:35:46.813938  492025 api_server.go:72] duration metric: took 16.260672877s to wait for apiserver process to appear ...
	I1025 10:35:46.813966  492025 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:35:46.813990  492025 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:35:46.825736  492025 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:35:46.826798  492025 api_server.go:141] control plane version: v1.34.1
	I1025 10:35:46.826824  492025 api_server.go:131] duration metric: took 12.849832ms to wait for apiserver health ...
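
The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it returns 200 ok. A sketch of the probe (certificate verification is skipped here only because this example carries no cluster CA bundle; that shortcut is mine, not something the log shows):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
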
	I1025 10:35:46.826835  492025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:35:46.830601  492025 system_pods.go:59] 8 kube-system pods found
	I1025 10:35:46.830648  492025 system_pods.go:61] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:46.830658  492025 system_pods.go:61] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:46.830668  492025 system_pods.go:61] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:46.830673  492025 system_pods.go:61] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:46.830684  492025 system_pods.go:61] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:46.830693  492025 system_pods.go:61] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:46.830703  492025 system_pods.go:61] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:46.830708  492025 system_pods.go:61] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:46.830716  492025 system_pods.go:74] duration metric: took 3.873564ms to wait for pod list to return data ...
	I1025 10:35:46.830728  492025 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:35:46.833432  492025 default_sa.go:45] found service account: "default"
	I1025 10:35:46.833461  492025 default_sa.go:55] duration metric: took 2.726575ms for default service account to be created ...
	I1025 10:35:46.833470  492025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:35:46.836392  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:46.836424  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:46.836431  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:46.836450  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:46.836456  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:46.836467  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:46.836471  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:46.836476  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:46.836489  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:46.836507  492025 retry.go:31] will retry after 245.175314ms: missing components: kube-dns
	I1025 10:35:47.096084  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.096123  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:47.096135  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.096141  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.096147  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.096152  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.096156  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.096159  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.096169  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:47.096182  492025 retry.go:31] will retry after 327.446637ms: missing components: kube-dns
	I1025 10:35:47.443321  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.443358  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:47.443366  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.443372  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.443378  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.443383  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.443387  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.443391  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.443401  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:47.443418  492025 retry.go:31] will retry after 298.548705ms: missing components: kube-dns
	I1025 10:35:47.747559  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.747593  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Running
	I1025 10:35:47.747600  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.747605  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.747609  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.747614  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.747618  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.747622  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.747626  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Running
	I1025 10:35:47.747633  492025 system_pods.go:126] duration metric: took 914.157593ms to wait for k8s-apps to be running ...
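
The "will retry after …: missing components: kube-dns" lines above show a poll-with-randomized-delay loop around the kube-system pod list. A schematic version of that loop; missingComponents is hypothetical, standing in for the real pod check:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// missingComponents is a hypothetical stand-in for the kube-system pod check;
// here it pretends kube-dns turns Running on the third attempt.
func missingComponents(attempt int) []string {
	if attempt < 3 {
		return []string{"kube-dns"}
	}
	return nil
}

func main() {
	for attempt := 1; ; attempt++ {
		missing := missingComponents(attempt)
		if len(missing) == 0 {
			fmt.Println("all k8s-apps running")
			return
		}
		// Sleep a short randomized interval between attempts, as the log does.
		delay := time.Duration(200+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %v\n", delay, missing)
		time.Sleep(delay)
	}
}
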
	I1025 10:35:47.747645  492025 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:35:47.747701  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:47.770357  492025 system_svc.go:56] duration metric: took 22.702207ms WaitForService to wait for kubelet
	I1025 10:35:47.770383  492025 kubeadm.go:586] duration metric: took 17.217123335s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:35:47.770403  492025 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:35:47.773820  492025 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:35:47.773865  492025 node_conditions.go:123] node cpu capacity is 2
	I1025 10:35:47.773878  492025 node_conditions.go:105] duration metric: took 3.468914ms to run NodePressure ...
	I1025 10:35:47.773900  492025 start.go:241] waiting for startup goroutines ...
	I1025 10:35:47.773914  492025 start.go:246] waiting for cluster config update ...
	I1025 10:35:47.773934  492025 start.go:255] writing updated cluster config ...
	I1025 10:35:47.774288  492025 ssh_runner.go:195] Run: rm -f paused
	I1025 10:35:47.783831  492025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:35:47.787579  492025 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.796854  492025 pod_ready.go:94] pod "coredns-66bc5c9577-xpwdq" is "Ready"
	I1025 10:35:47.796890  492025 pod_ready.go:86] duration metric: took 9.273897ms for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.802738  492025 pod_ready.go:83] waiting for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.811975  492025 pod_ready.go:94] pod "etcd-no-preload-768303" is "Ready"
	I1025 10:35:47.812001  492025 pod_ready.go:86] duration metric: took 9.229728ms for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.814579  492025 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.821178  492025 pod_ready.go:94] pod "kube-apiserver-no-preload-768303" is "Ready"
	I1025 10:35:47.821209  492025 pod_ready.go:86] duration metric: took 6.600804ms for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.827372  492025 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.188143  492025 pod_ready.go:94] pod "kube-controller-manager-no-preload-768303" is "Ready"
	I1025 10:35:48.188172  492025 pod_ready.go:86] duration metric: took 360.769381ms for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.388957  492025 pod_ready.go:83] waiting for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.788647  492025 pod_ready.go:94] pod "kube-proxy-m9bnn" is "Ready"
	I1025 10:35:48.788738  492025 pod_ready.go:86] duration metric: took 399.711479ms for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.988464  492025 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:49.388263  492025 pod_ready.go:94] pod "kube-scheduler-no-preload-768303" is "Ready"
	I1025 10:35:49.388330  492025 pod_ready.go:86] duration metric: took 399.797147ms for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:49.388357  492025 pod_ready.go:40] duration metric: took 1.604481041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:35:49.491358  492025 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:35:49.494749  492025 out.go:179] * Done! kubectl is now configured to use "no-preload-768303" cluster and "default" namespace by default
	I1025 10:35:51.879418  496139 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.920971789s
	I1025 10:35:52.640577  496139 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.68309712s
	I1025 10:35:53.959798  496139 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.001926144s
	I1025 10:35:53.981857  496139 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:35:54.007366  496139 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:35:54.032732  496139 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:35:54.032942  496139 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-491554 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:35:54.046422  496139 kubeadm.go:318] [bootstrap-token] Using token: v775vr.d5u8fng82rptj6kr
	I1025 10:35:54.049341  496139 out.go:252]   - Configuring RBAC rules ...
	I1025 10:35:54.049468  496139 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:35:54.058531  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:35:54.072319  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:35:54.078266  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:35:54.084488  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:35:54.092651  496139 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:35:54.366906  496139 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:35:54.825169  496139 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:35:55.367126  496139 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:35:55.368310  496139 kubeadm.go:318] 
	I1025 10:35:55.368402  496139 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:35:55.368413  496139 kubeadm.go:318] 
	I1025 10:35:55.368495  496139 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:35:55.368505  496139 kubeadm.go:318] 
	I1025 10:35:55.368531  496139 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:35:55.368604  496139 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:35:55.368660  496139 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:35:55.368669  496139 kubeadm.go:318] 
	I1025 10:35:55.368726  496139 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:35:55.368734  496139 kubeadm.go:318] 
	I1025 10:35:55.368784  496139 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:35:55.368789  496139 kubeadm.go:318] 
	I1025 10:35:55.368843  496139 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:35:55.368926  496139 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:35:55.369003  496139 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:35:55.369012  496139 kubeadm.go:318] 
	I1025 10:35:55.369100  496139 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:35:55.369185  496139 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:35:55.369192  496139 kubeadm.go:318] 
	I1025 10:35:55.369301  496139 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token v775vr.d5u8fng82rptj6kr \
	I1025 10:35:55.369409  496139 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:35:55.369432  496139 kubeadm.go:318] 	--control-plane 
	I1025 10:35:55.369437  496139 kubeadm.go:318] 
	I1025 10:35:55.369525  496139 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:35:55.369530  496139 kubeadm.go:318] 
	I1025 10:35:55.369615  496139 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token v775vr.d5u8fng82rptj6kr \
	I1025 10:35:55.369721  496139 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:35:55.376260  496139 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:35:55.376506  496139 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:35:55.376620  496139 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:35:55.376636  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:55.376644  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:55.379781  496139 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:35:55.382673  496139 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:35:55.387064  496139 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:35:55.387084  496139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:35:55.402565  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:35:55.702728  496139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:35:55.702828  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:55.702873  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-491554 minikube.k8s.io/updated_at=2025_10_25T10_35_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=newest-cni-491554 minikube.k8s.io/primary=true
	I1025 10:35:55.856331  496139 ops.go:34] apiserver oom_adj: -16
	I1025 10:35:55.856442  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:56.356494  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:56.857056  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:57.357392  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
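
Note: the sha256 value in the `--discovery-token-ca-cert-hash` flag printed by kubeadm above is just a hash of the cluster CA's public key, so a lost join command can be reconstructed on the control plane. A minimal sketch, assuming the standard kubeadm layout (/etc/kubernetes/pki/ca.crt; minikube's copy typically lives at /var/lib/minikube/certs/ca.crt):

  # recompute the --discovery-token-ca-cert-hash from the cluster CA
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
  # or mint a fresh token together with a matching join line
  kubeadm token create --print-join-command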
	
	
	==> CRI-O <==
	Oct 25 10:35:47 no-preload-768303 crio[833]: time="2025-10-25T10:35:47.142107635Z" level=info msg="Created container 1b98f4aee6d7cc8ab80a79f2cc8b4407bb78b1bb19b36c1d5fa448004e7e441d: kube-system/coredns-66bc5c9577-xpwdq/coredns" id=4418e7ea-9912-4922-bc6c-4520e663ec0c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:35:47 no-preload-768303 crio[833]: time="2025-10-25T10:35:47.143587208Z" level=info msg="Starting container: 1b98f4aee6d7cc8ab80a79f2cc8b4407bb78b1bb19b36c1d5fa448004e7e441d" id=d2d314f2-8c26-42a7-a124-5b1053519de7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:35:47 no-preload-768303 crio[833]: time="2025-10-25T10:35:47.152441529Z" level=info msg="Started container" PID=2496 containerID=1b98f4aee6d7cc8ab80a79f2cc8b4407bb78b1bb19b36c1d5fa448004e7e441d description=kube-system/coredns-66bc5c9577-xpwdq/coredns id=d2d314f2-8c26-42a7-a124-5b1053519de7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f701af78b1515126ba43cea68c6458e2df31ca10463ec9d68f09f384f9b0beeb
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.091708089Z" level=info msg="Running pod sandbox: default/busybox/POD" id=33c0d6a3-721e-4a0f-a866-8e9be003e31d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.091790601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.097797319Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f UID:d33e33c4-4af4-48a5-94f1-bc1b25bbdda6 NetNS:/var/run/netns/dcf4d693-a84b-47a9-bf4c-c3f07e56da3a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dae0}] Aliases:map[]}"
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.097983735Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.119880538Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f UID:d33e33c4-4af4-48a5-94f1-bc1b25bbdda6 NetNS:/var/run/netns/dcf4d693-a84b-47a9-bf4c-c3f07e56da3a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dae0}] Aliases:map[]}"
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.120263707Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.124780834Z" level=info msg="Ran pod sandbox fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f with infra container: default/busybox/POD" id=33c0d6a3-721e-4a0f-a866-8e9be003e31d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.128533001Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d26d6f36-46f4-46e0-bc55-369e05af69d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.128942452Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d26d6f36-46f4-46e0-bc55-369e05af69d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.129070331Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=d26d6f36-46f4-46e0-bc55-369e05af69d7 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.129977627Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30d98326-eb6c-41bb-bf2b-1fb3b91faac3 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:35:50 no-preload-768303 crio[833]: time="2025-10-25T10:35:50.13280278Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.331598401Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=30d98326-eb6c-41bb-bf2b-1fb3b91faac3 name=/runtime.v1.ImageService/PullImage
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.332380863Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=149d79af-5662-433c-9fbe-455120ea5074 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.336750877Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26e5f6e0-6923-464d-b6bd-2cbdebda70a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.346209376Z" level=info msg="Creating container: default/busybox/busybox" id=a174d4b8-68ff-4e56-822b-3604ebfa3925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.346337189Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.355270395Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.355776397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.378051545Z" level=info msg="Created container 1266f3782cac8cff0e7ea3efa4b410396ac7bc82866534794866a8222cf21db1: default/busybox/busybox" id=a174d4b8-68ff-4e56-822b-3604ebfa3925 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.381648936Z" level=info msg="Starting container: 1266f3782cac8cff0e7ea3efa4b410396ac7bc82866534794866a8222cf21db1" id=a4728bfd-e8d8-41d3-848b-eaaffbe3a9aa name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:35:52 no-preload-768303 crio[833]: time="2025-10-25T10:35:52.385636471Z" level=info msg="Started container" PID=2547 containerID=1266f3782cac8cff0e7ea3efa4b410396ac7bc82866534794866a8222cf21db1 description=default/busybox/busybox id=a4728bfd-e8d8-41d3-848b-eaaffbe3a9aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	1266f3782cac8       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   fcf767eec75c0       busybox                                     default
	1b98f4aee6d7c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   f701af78b1515       coredns-66bc5c9577-xpwdq                    kube-system
	8bda5f61ab21a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   6538377bcfc41       storage-provisioner                         kube-system
	a2bcd2be1f642       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   dc92752277bc3       kindnet-gkbg7                               kube-system
	9fcf673db7acc       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      30 seconds ago      Running             kube-proxy                0                   0e973ea330f8b       kube-proxy-m9bnn                            kube-system
	232cc73f40685       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   f6d45e66bec36       kube-apiserver-no-preload-768303            kube-system
	9bac2895724e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   deab71583a20b       kube-scheduler-no-preload-768303            kube-system
	537eacf207db5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   124986b0ef3b4       etcd-no-preload-768303                      kube-system
	827fe6ea182de       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   79999eb9b000a       kube-controller-manager-no-preload-768303   kube-system
	
	
	==> coredns [1b98f4aee6d7cc8ab80a79f2cc8b4407bb78b1bb19b36c1d5fa448004e7e441d] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59257 - 57702 "HINFO IN 2432045407235157834.399253618093537514. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.011765309s
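
Note: since the busybox pod came up in the default namespace, it doubles as a quick in-cluster probe against this CoreDNS instance (busybox 1.28's working nslookup is why minikube tests pin that tag). A minimal check:

  kubectl exec busybox -- nslookup kubernetes.default
  # or query the kube-dns ClusterIP seen in the apiserver log directly
  kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10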
	
	
	==> describe nodes <==
	Name:               no-preload-768303
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-768303
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-768303
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-768303
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:35:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-768303
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02b80f62-aa20-40d0-81a6-fccd316d79be
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-xpwdq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     31s
	  kube-system                 etcd-no-preload-768303                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-gkbg7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-768303             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-768303    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-m9bnn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-768303             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 29s                kube-proxy       
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  37s                kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s                kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s                kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-768303 event: Registered Node no-preload-768303 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-768303 status is now: NodeReady
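
Note: the "Allocated resources" percentages above are taken against the node's 2-CPU allocatable: the CPU requests sum to 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 850m, and 850m / 2000m ≈ 42%. The same table can be re-derived at any time with:

  kubectl describe node no-preload-768303 | grep -A 12 'Allocated resources'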
	
	
	==> dmesg <==
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [537eacf207db5cef4aa2511198450cc751c37c67b3c423b415e092f106608c2d] <==
	{"level":"warn","ts":"2025-10-25T10:35:19.323728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.350552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.380119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.405411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.430229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.448971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.475754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.509996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.564728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.616457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.666623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.694609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.710825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.728690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.747685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.782533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.812334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.828190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.853643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.867873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.897458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.923783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.938518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:19.961749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:20.036240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56026","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:01 up  2:18,  0 user,  load average: 4.65, 3.84, 3.27
	Linux no-preload-768303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a2bcd2be1f6429fecef079218c6066465b9e5aeeef496bca65e90fe65943921a] <==
	I1025 10:35:35.984026       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:35:36.075222       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:35:36.075399       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:35:36.075461       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:35:36.075485       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:35:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:35:36.278313       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:35:36.278537       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:35:36.278600       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:35:36.278753       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1025 10:35:36.479407       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:35:36.479446       1 metrics.go:72] Registering metrics
	I1025 10:35:36.479509       1 controller.go:711] "Syncing nftables rules"
	I1025 10:35:46.285661       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:35:46.285713       1 main.go:301] handling current node
	I1025 10:35:56.277816       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:35:56.277859       1 main.go:301] handling current node
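
Note: kindnet's "handling current node" lines reflect that it only programs routes for peer nodes, and this is a single-node cluster, so there is nothing to add. The per-node pod CIDRs it would route can be listed with plain kubectl (nothing minikube-specific):

  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'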
	
	
	==> kube-apiserver [232cc73f406853ef29461e43e3cb84e30e83d696bcdafa9d92708ebc258698c9] <==
	I1025 10:35:21.199666       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:35:21.200981       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:35:21.201690       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1025 10:35:21.228776       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:21.233365       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1025 10:35:21.269021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:21.270444       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:35:21.838898       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:35:21.846008       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:35:21.846031       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:35:23.093021       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:35:23.221477       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:35:23.457176       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:35:23.512735       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1025 10:35:23.514229       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:35:23.552338       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:35:24.114720       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:35:24.241405       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:35:24.265940       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:35:24.333153       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:35:29.912014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:29.917266       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:29.968882       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:35:30.162620       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1025 10:35:58.921153       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:36284: use of closed network connection
	
	
	==> kube-controller-manager [827fe6ea182de3da352d15f379ee9b0fee81d801140d39ff7c8d239e2fc14fc4] <==
	I1025 10:35:29.205559       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:35:29.205635       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:35:29.205689       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:35:29.205878       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:35:29.207628       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1025 10:35:29.207680       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:35:29.207766       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1025 10:35:29.207839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1025 10:35:29.209777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1025 10:35:29.217449       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:35:29.222552       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:35:29.223063       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:35:29.227030       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:35:29.227125       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:35:29.227301       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-768303"
	I1025 10:35:29.227351       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1025 10:35:29.233534       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:35:29.234683       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:35:29.250104       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:35:29.250364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:35:29.251687       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:35:29.255957       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1025 10:35:29.258094       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:35:29.264903       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:35:49.231016       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9fcf673db7accdf6c52951e29edd4460e4001360806b8006b2f238719ae56126] <==
	I1025 10:35:31.137114       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:35:31.248967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:35:31.350121       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:35:31.350154       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:35:31.350230       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:35:31.523550       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:35:31.523604       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:35:31.549817       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:35:31.550102       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:35:31.550116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:35:31.559824       1 config.go:200] "Starting service config controller"
	I1025 10:35:31.559849       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:35:31.572719       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:35:31.572742       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:35:31.572767       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:35:31.572772       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:35:31.576497       1 config.go:309] "Starting node config controller"
	I1025 10:35:31.576516       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:35:31.576523       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:35:31.660198       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:35:31.673438       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:35:31.673470       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
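
Note: kube-proxy's warning about nodePortAddresses points at its own fix: set the field to ["primary"] so NodePort traffic is accepted only on the node's primary IPs. A minimal sketch for a kubeadm-managed cluster like this one (the kube-proxy ConfigMap / config.conf layout is kubeadm's convention, not universal):

  # see the current (unset) value
  kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
  # edit config.conf to add:  nodePortAddresses: ["primary"]
  kubectl -n kube-system edit configmap kube-proxy
  # restart so the new config is picked up
  kubectl -n kube-system rollout restart daemonset kube-proxy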
	
	
	==> kube-scheduler [9bac2895724e4819a0c16575edefad3c5e635404909b0e9bacf91906691edaae] <==
	E1025 10:35:21.159581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:35:21.159666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:35:21.161181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:35:21.161334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:35:21.161901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:35:21.170146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:35:21.171321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:35:21.993603       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:35:22.024767       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:35:22.083110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:35:22.087646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1025 10:35:22.140508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:35:22.247030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:35:22.274136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:35:22.280675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:35:22.322090       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:35:22.323798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:35:22.334411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:35:22.372263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:35:22.422573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:35:22.490711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:35:22.624623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:35:22.633646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:35:22.648534       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1025 10:35:24.986570       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
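
Note: every "Failed to watch ... forbidden" error above predates 10:35:24, when the extension-apiserver-authentication informer finally syncs: this is the usual startup race while kubeadm is still bootstrapping the scheduler's RBAC, not a persistent permission problem. Once the cluster is up, a quick confirmation (standard kubectl impersonation):

  kubectl auth can-i list pods --as=system:kube-scheduler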
	
	
	==> kubelet <==
	Oct 25 10:35:25 no-preload-768303 kubelet[1998]: I1025 10:35:25.678520    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-768303" podStartSLOduration=0.678492583 podStartE2EDuration="678.492583ms" podCreationTimestamp="2025-10-25 10:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:25.654231455 +0000 UTC m=+1.540867200" watchObservedRunningTime="2025-10-25 10:35:25.678492583 +0000 UTC m=+1.565128328"
	Oct 25 10:35:29 no-preload-768303 kubelet[1998]: I1025 10:35:29.231518    1998 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 25 10:35:29 no-preload-768303 kubelet[1998]: I1025 10:35:29.232267    1998 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.277044    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2844e492-0201-4963-9c6c-74f19df0adea-cni-cfg\") pod \"kindnet-gkbg7\" (UID: \"2844e492-0201-4963-9c6c-74f19df0adea\") " pod="kube-system/kindnet-gkbg7"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.277097    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2844e492-0201-4963-9c6c-74f19df0adea-xtables-lock\") pod \"kindnet-gkbg7\" (UID: \"2844e492-0201-4963-9c6c-74f19df0adea\") " pod="kube-system/kindnet-gkbg7"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.277121    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2844e492-0201-4963-9c6c-74f19df0adea-lib-modules\") pod \"kindnet-gkbg7\" (UID: \"2844e492-0201-4963-9c6c-74f19df0adea\") " pod="kube-system/kindnet-gkbg7"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.277144    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6qz6\" (UniqueName: \"kubernetes.io/projected/2844e492-0201-4963-9c6c-74f19df0adea-kube-api-access-w6qz6\") pod \"kindnet-gkbg7\" (UID: \"2844e492-0201-4963-9c6c-74f19df0adea\") " pod="kube-system/kindnet-gkbg7"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.378513    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1ef05c2-0d0d-43f8-9bb8-f77839881a24-kube-proxy\") pod \"kube-proxy-m9bnn\" (UID: \"d1ef05c2-0d0d-43f8-9bb8-f77839881a24\") " pod="kube-system/kube-proxy-m9bnn"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.378612    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1ef05c2-0d0d-43f8-9bb8-f77839881a24-lib-modules\") pod \"kube-proxy-m9bnn\" (UID: \"d1ef05c2-0d0d-43f8-9bb8-f77839881a24\") " pod="kube-system/kube-proxy-m9bnn"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.378642    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjzc\" (UniqueName: \"kubernetes.io/projected/d1ef05c2-0d0d-43f8-9bb8-f77839881a24-kube-api-access-kdjzc\") pod \"kube-proxy-m9bnn\" (UID: \"d1ef05c2-0d0d-43f8-9bb8-f77839881a24\") " pod="kube-system/kube-proxy-m9bnn"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.378869    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1ef05c2-0d0d-43f8-9bb8-f77839881a24-xtables-lock\") pod \"kube-proxy-m9bnn\" (UID: \"d1ef05c2-0d0d-43f8-9bb8-f77839881a24\") " pod="kube-system/kube-proxy-m9bnn"
	Oct 25 10:35:30 no-preload-768303 kubelet[1998]: I1025 10:35:30.400135    1998 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:35:31 no-preload-768303 kubelet[1998]: I1025 10:35:31.551781    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m9bnn" podStartSLOduration=1.551760405 podStartE2EDuration="1.551760405s" podCreationTimestamp="2025-10-25 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:31.541824408 +0000 UTC m=+7.428460251" watchObservedRunningTime="2025-10-25 10:35:31.551760405 +0000 UTC m=+7.438396150"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.644520    1998 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.688697    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-gkbg7" podStartSLOduration=11.596526145 podStartE2EDuration="16.688678274s" podCreationTimestamp="2025-10-25 10:35:30 +0000 UTC" firstStartedPulling="2025-10-25 10:35:30.727556124 +0000 UTC m=+6.614191877" lastFinishedPulling="2025-10-25 10:35:35.819708261 +0000 UTC m=+11.706344006" observedRunningTime="2025-10-25 10:35:36.557727261 +0000 UTC m=+12.444363145" watchObservedRunningTime="2025-10-25 10:35:46.688678274 +0000 UTC m=+22.575314027"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.762769    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2q8t\" (UniqueName: \"kubernetes.io/projected/89da7f26-c2be-43b2-817c-6c2621a97a30-kube-api-access-r2q8t\") pod \"storage-provisioner\" (UID: \"89da7f26-c2be-43b2-817c-6c2621a97a30\") " pod="kube-system/storage-provisioner"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.762829    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0aaecbc4-29e5-45e1-ad80-b2465476ab96-config-volume\") pod \"coredns-66bc5c9577-xpwdq\" (UID: \"0aaecbc4-29e5-45e1-ad80-b2465476ab96\") " pod="kube-system/coredns-66bc5c9577-xpwdq"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.762856    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/89da7f26-c2be-43b2-817c-6c2621a97a30-tmp\") pod \"storage-provisioner\" (UID: \"89da7f26-c2be-43b2-817c-6c2621a97a30\") " pod="kube-system/storage-provisioner"
	Oct 25 10:35:46 no-preload-768303 kubelet[1998]: I1025 10:35:46.762888    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrh9p\" (UniqueName: \"kubernetes.io/projected/0aaecbc4-29e5-45e1-ad80-b2465476ab96-kube-api-access-zrh9p\") pod \"coredns-66bc5c9577-xpwdq\" (UID: \"0aaecbc4-29e5-45e1-ad80-b2465476ab96\") " pod="kube-system/coredns-66bc5c9577-xpwdq"
	Oct 25 10:35:47 no-preload-768303 kubelet[1998]: W1025 10:35:47.067877    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/crio-f701af78b1515126ba43cea68c6458e2df31ca10463ec9d68f09f384f9b0beeb WatchSource:0}: Error finding container f701af78b1515126ba43cea68c6458e2df31ca10463ec9d68f09f384f9b0beeb: Status 404 returned error can't find the container with id f701af78b1515126ba43cea68c6458e2df31ca10463ec9d68f09f384f9b0beeb
	Oct 25 10:35:47 no-preload-768303 kubelet[1998]: I1025 10:35:47.601076    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.601054515 podStartE2EDuration="15.601054515s" podCreationTimestamp="2025-10-25 10:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:47.584907629 +0000 UTC m=+23.471543374" watchObservedRunningTime="2025-10-25 10:35:47.601054515 +0000 UTC m=+23.487690260"
	Oct 25 10:35:49 no-preload-768303 kubelet[1998]: I1025 10:35:49.778335    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xpwdq" podStartSLOduration=19.778315236 podStartE2EDuration="19.778315236s" podCreationTimestamp="2025-10-25 10:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:47.602251449 +0000 UTC m=+23.488887218" watchObservedRunningTime="2025-10-25 10:35:49.778315236 +0000 UTC m=+25.664950981"
	Oct 25 10:35:49 no-preload-768303 kubelet[1998]: I1025 10:35:49.888471    1998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b2lp\" (UniqueName: \"kubernetes.io/projected/d33e33c4-4af4-48a5-94f1-bc1b25bbdda6-kube-api-access-6b2lp\") pod \"busybox\" (UID: \"d33e33c4-4af4-48a5-94f1-bc1b25bbdda6\") " pod="default/busybox"
	Oct 25 10:35:50 no-preload-768303 kubelet[1998]: W1025 10:35:50.122259    1998 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/crio-fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f WatchSource:0}: Error finding container fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f: Status 404 returned error can't find the container with id fcf767eec75c06a46a1f297e788f2be66fb4ead1bb0b7909d96422c3a5919e9f
	Oct 25 10:35:52 no-preload-768303 kubelet[1998]: I1025 10:35:52.597163    1998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.393071739 podStartE2EDuration="3.597143465s" podCreationTimestamp="2025-10-25 10:35:49 +0000 UTC" firstStartedPulling="2025-10-25 10:35:50.129342376 +0000 UTC m=+26.015978121" lastFinishedPulling="2025-10-25 10:35:52.333414102 +0000 UTC m=+28.220049847" observedRunningTime="2025-10-25 10:35:52.596489038 +0000 UTC m=+28.483124783" watchObservedRunningTime="2025-10-25 10:35:52.597143465 +0000 UTC m=+28.483779218"
	
	
	==> storage-provisioner [8bda5f61ab21a670ca266564ddb5d44b836eddd29243d445cd831b074e51ec2b] <==
	I1025 10:35:47.184460       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1025 10:35:47.235790       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:35:47.235913       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:35:47.255220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:47.297869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:47.298134       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:35:47.298732       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-768303_ac5cafe5-3b90-43d7-a536-962b4d50607c!
	I1025 10:35:47.298586       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"911c92e5-c16f-402a-9e0d-e46ef78d17f2", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-768303_ac5cafe5-3b90-43d7-a536-962b4d50607c became leader
	W1025 10:35:47.316725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:47.337033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:35:47.399205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-768303_ac5cafe5-3b90-43d7-a536-962b4d50607c!
	W1025 10:35:49.340869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:49.348047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:51.351622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:51.358490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:53.362336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:53.367234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:55.372167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:55.379074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:57.382377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:57.388539       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:59.391843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:35:59.400822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:36:01.407485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:36:01.420533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-768303 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.69s)
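Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated" warnings are noise rather than the failure cause. This provisioner still renews its leader-election lock through a legacy v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so every renewal trips the deprecation warning. A minimal check of the lock holder, assuming kubectl is still pointed at this cluster (context and object names taken from the output above):

	# Endpoints-based locks record the holder in the
	# control-plane.alpha.kubernetes.io/leader annotation.
	kubectl --context no-preload-768303 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml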

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (317.850744ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
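Exit status 11 here is minikube's pre-flight paused check failing, not the metrics-server addon itself: before enabling an addon, minikube probes the runtime with "sudo runc list -f json" inside the node container, and that probe died because /run/runc did not exist. Since /run is a tmpfs in the kicbase container (see the Tmpfs entry in the docker inspect output below), the runc state directory is only present once the runtime has created container state since boot, so its absence suggests the runtime had no live state at that moment. A diagnostic sketch, assuming the newest-cni-491554 node container from the inspect output below is still running:

	# Re-run the exact probe minikube uses for the paused check:
	docker exec newest-cni-491554 sudo runc list -f json
	# Inspect the state directory the error message points at:
	docker exec newest-cni-491554 ls -la /run/runc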
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-491554
helpers_test.go:243: (dbg) docker inspect newest-cni-491554:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	        "Created": "2025-10-25T10:35:24.032490574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496550,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:35:24.09892115Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216-json.log",
	        "Name": "/newest-cni-491554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-491554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-491554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	                "LowerDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-491554",
	                "Source": "/var/lib/docker/volumes/newest-cni-491554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-491554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-491554",
	                "name.minikube.sigs.k8s.io": "newest-cni-491554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9f79de00f86f7d7992531fe3bd37f897b6420527a30e2fe6b86785a5945a2731",
	            "SandboxKey": "/var/run/docker/netns/9f79de00f86f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-491554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:e0:f8:95:45:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f83aa7d97dd61a3e183e8b61de27687f028a404822311667002b081cafdf7acf",
	                    "EndpointID": "5d1b02c1ccfc3f606d84cd2139436cb7b68eecfcfe748e2303dbbf3a69627108",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-491554",
	                        "3a1d576c3602"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
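All ports in the inspect output above are published only on 127.0.0.1 with ephemeral host ports, so, for example, the apiserver behind the 8443/tcp mapping is reachable at 127.0.0.1:33460 on the build host. To pull a single mapping without scanning the full JSON, a Go-template query against the same container works:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-491554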
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25: (1.196193636s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:31 UTC │ 25 Oct 25 10:32 UTC │
	│ delete  │ -p cert-expiration-313068                                                                                                                                                                                                                     │ cert-expiration-313068       │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:32 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:32 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-204074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-204074 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:33 UTC │
	│ start   │ -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-768303 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:35:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:35:17.591860  496139 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:35:17.592429  496139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:17.592443  496139 out.go:374] Setting ErrFile to fd 2...
	I1025 10:35:17.592448  496139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:35:17.592831  496139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:35:17.593399  496139 out.go:368] Setting JSON to false
	I1025 10:35:17.594390  496139 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8268,"bootTime":1761380250,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:35:17.594521  496139 start.go:141] virtualization:  
	I1025 10:35:17.598586  496139 out.go:179] * [newest-cni-491554] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:35:17.601963  496139 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:35:17.602006  496139 notify.go:220] Checking for updates...
	I1025 10:35:17.608563  496139 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:35:17.611685  496139 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:35:17.614626  496139 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:35:17.618070  496139 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:35:17.620961  496139 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:35:17.624310  496139 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:17.624458  496139 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:35:17.671958  496139 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:35:17.672090  496139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:17.796172  496139 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-25 10:35:17.783297005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:17.796297  496139 docker.go:318] overlay module found
	I1025 10:35:17.799455  496139 out.go:179] * Using the docker driver based on user configuration
	I1025 10:35:17.802323  496139 start.go:305] selected driver: docker
	I1025 10:35:17.802344  496139 start.go:925] validating driver "docker" against <nil>
	I1025 10:35:17.802373  496139 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:35:17.803074  496139 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:35:17.910349  496139 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-25 10:35:17.899231758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:35:17.910514  496139 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1025 10:35:17.910536  496139 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1025 10:35:17.910755  496139 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:35:17.913814  496139 out.go:179] * Using Docker driver with root privileges
	I1025 10:35:17.916795  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:17.916872  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:17.916885  496139 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:35:17.916960  496139 start.go:349] cluster config:
	{Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:17.920200  496139 out.go:179] * Starting "newest-cni-491554" primary control-plane node in "newest-cni-491554" cluster
	I1025 10:35:17.923095  496139 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:35:17.926037  496139 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:35:17.928921  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:17.928984  496139 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:35:17.928999  496139 cache.go:58] Caching tarball of preloaded images
	I1025 10:35:17.929084  496139 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:35:17.929100  496139 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:35:17.929224  496139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json ...
	I1025 10:35:17.929247  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json: {Name:mk30af115cc70131ab70ab52b597c60671b064da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:17.929418  496139 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:35:17.957822  496139 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:35:17.957851  496139 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:35:17.957865  496139 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:35:17.957889  496139 start.go:360] acquireMachinesLock for newest-cni-491554: {Name:mk0633ca83cb1f39b8a26429220857914907c494 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:35:17.957997  496139 start.go:364] duration metric: took 86.303µs to acquireMachinesLock for "newest-cni-491554"
	I1025 10:35:17.958030  496139 start.go:93] Provisioning new machine with config: &{Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:35:17.958112  496139 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:35:14.891635  492025 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:35:14.891759  492025 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:35:15.887723  492025 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001843151s
	I1025 10:35:15.892627  492025 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:35:15.892726  492025 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1025 10:35:15.893020  492025 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:35:15.893109  492025 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1025 10:35:19.360909  492025 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.467690796s
	I1025 10:35:17.961560  496139 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:35:17.961810  496139 start.go:159] libmachine.API.Create for "newest-cni-491554" (driver="docker")
	I1025 10:35:17.961859  496139 client.go:168] LocalClient.Create starting
	I1025 10:35:17.961946  496139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:35:17.961986  496139 main.go:141] libmachine: Decoding PEM data...
	I1025 10:35:17.962002  496139 main.go:141] libmachine: Parsing certificate...
	I1025 10:35:17.962061  496139 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:35:17.962083  496139 main.go:141] libmachine: Decoding PEM data...
	I1025 10:35:17.962098  496139 main.go:141] libmachine: Parsing certificate...
	I1025 10:35:17.962468  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:35:17.991312  496139 cli_runner.go:211] docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:35:17.991400  496139 network_create.go:284] running [docker network inspect newest-cni-491554] to gather additional debugging logs...
	I1025 10:35:17.991422  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554
	W1025 10:35:18.016422  496139 cli_runner.go:211] docker network inspect newest-cni-491554 returned with exit code 1
	I1025 10:35:18.016460  496139 network_create.go:287] error running [docker network inspect newest-cni-491554]: docker network inspect newest-cni-491554: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-491554 not found
	I1025 10:35:18.016487  496139 network_create.go:289] output of [docker network inspect newest-cni-491554]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-491554 not found
	
	** /stderr **
	I1025 10:35:18.016589  496139 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:35:18.051574  496139 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:35:18.051881  496139 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:35:18.052220  496139 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:35:18.052634  496139 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001977360}
	I1025 10:35:18.052655  496139 network_create.go:124] attempt to create docker network newest-cni-491554 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:35:18.052713  496139 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-491554 newest-cni-491554
	I1025 10:35:18.150078  496139 network_create.go:108] docker network newest-cni-491554 192.168.76.0/24 created
	I1025 10:35:18.150108  496139 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-491554" container
	I1025 10:35:18.150181  496139 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:35:18.170461  496139 cli_runner.go:164] Run: docker volume create newest-cni-491554 --label name.minikube.sigs.k8s.io=newest-cni-491554 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:35:18.192165  496139 oci.go:103] Successfully created a docker volume newest-cni-491554
	I1025 10:35:18.192256  496139 cli_runner.go:164] Run: docker run --rm --name newest-cni-491554-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-491554 --entrypoint /usr/bin/test -v newest-cni-491554:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:35:18.922178  496139 oci.go:107] Successfully prepared a docker volume newest-cni-491554
	I1025 10:35:18.922230  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:18.922250  496139 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:35:18.922328  496139 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-491554:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1025 10:35:21.180361  492025 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.287686538s
	I1025 10:35:23.394664  492025 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501897785s
	I1025 10:35:23.457372  492025 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:35:23.511022  492025 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:35:23.595073  492025 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:35:23.595586  492025 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-768303 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:35:23.630459  492025 kubeadm.go:318] [bootstrap-token] Using token: c9xqcz.fi2iogmqoucis458
	I1025 10:35:23.656117  492025 out.go:252]   - Configuring RBAC rules ...
	I1025 10:35:23.656276  492025 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:35:23.674548  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:35:23.690336  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:35:23.714970  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:35:23.720888  492025 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:35:23.728883  492025 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:35:23.805150  492025 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:35:24.268308  492025 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:35:24.816708  492025 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:35:24.816729  492025 kubeadm.go:318] 
	I1025 10:35:24.816810  492025 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:35:24.816816  492025 kubeadm.go:318] 
	I1025 10:35:24.816897  492025 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:35:24.816902  492025 kubeadm.go:318] 
	I1025 10:35:24.816928  492025 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:35:24.816989  492025 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:35:24.817042  492025 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:35:24.817047  492025 kubeadm.go:318] 
	I1025 10:35:24.817102  492025 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:35:24.817112  492025 kubeadm.go:318] 
	I1025 10:35:24.817162  492025 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:35:24.817167  492025 kubeadm.go:318] 
	I1025 10:35:24.817221  492025 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:35:24.817298  492025 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:35:24.817369  492025 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:35:24.817373  492025 kubeadm.go:318] 
	I1025 10:35:24.817461  492025 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:35:24.817576  492025 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:35:24.817583  492025 kubeadm.go:318] 
	I1025 10:35:24.817675  492025 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token c9xqcz.fi2iogmqoucis458 \
	I1025 10:35:24.817784  492025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:35:24.817806  492025 kubeadm.go:318] 	--control-plane 
	I1025 10:35:24.817811  492025 kubeadm.go:318] 
	I1025 10:35:24.817899  492025 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:35:24.817903  492025 kubeadm.go:318] 
	I1025 10:35:24.817988  492025 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token c9xqcz.fi2iogmqoucis458 \
	I1025 10:35:24.818094  492025 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:35:24.831135  492025 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:35:24.831385  492025 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:35:24.831494  492025 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
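Note: the `--discovery-token-ca-cert-hash sha256:...` value in the join commands above is not a hash of the whole certificate file; kubeadm documents it as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that reproduces it, assuming the conventional kubeadm CA location on the control plane:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Conventional kubeadm location for the cluster CA on the control plane.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("ca.crt contains no PEM block")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The hash covers the DER-encoded Subject Public Key Info, not the file bytes.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}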
	I1025 10:35:24.831511  492025 cni.go:84] Creating CNI manager for ""
	I1025 10:35:24.831519  492025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:24.835277  492025 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:35:23.911777  496139 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-491554:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.989413369s)
	I1025 10:35:23.911808  496139 kic.go:203] duration metric: took 4.989554139s to extract preloaded images to volume ...
	W1025 10:35:23.911947  496139 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:35:23.912053  496139 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:35:24.016101  496139 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-491554 --name newest-cni-491554 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-491554 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-491554 --network newest-cni-491554 --ip 192.168.76.2 --volume newest-cni-491554:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:35:24.436537  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Running}}
	I1025 10:35:24.465635  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.490039  496139 cli_runner.go:164] Run: docker exec newest-cni-491554 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:35:24.546739  496139 oci.go:144] the created container "newest-cni-491554" has a running status.
	I1025 10:35:24.546774  496139 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa...
	I1025 10:35:24.756411  496139 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:35:24.785491  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.818343  496139 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:35:24.818402  496139 kic_runner.go:114] Args: [docker exec --privileged newest-cni-491554 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:35:24.889465  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:35:24.927666  496139 machine.go:93] provisionDockerMachine start ...
	I1025 10:35:24.927765  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:24.959350  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:24.959702  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:24.959715  496139 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:35:24.961983  496139 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41104->127.0.0.1:33457: read: connection reset by peer
	I1025 10:35:24.838955  492025 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:35:24.848564  492025 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:35:24.848584  492025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:35:24.907979  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:35:25.593910  492025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:35:25.594047  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:25.594119  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-768303 minikube.k8s.io/updated_at=2025_10_25T10_35_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=no-preload-768303 minikube.k8s.io/primary=true
	I1025 10:35:25.877364  492025 ops.go:34] apiserver oom_adj: -16
	I1025 10:35:25.877471  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:26.377813  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:26.878571  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:27.378436  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:27.878570  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:28.377588  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:28.877566  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:29.377580  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:29.878459  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:30.378179  492025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:30.552245  492025 kubeadm.go:1113] duration metric: took 4.958242676s to wait for elevateKubeSystemPrivileges
	I1025 10:35:30.552279  492025 kubeadm.go:402] duration metric: took 24.308197084s to StartCluster
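Note: the run of identical `kubectl get sa default` commands above, spaced roughly half a second apart, is minikube waiting for the default service account to appear before it finishes elevating kube-system privileges. A rough Go sketch of that retry shape; the kubeconfig path is reused from the log for illustration:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA retries `kubectl get sa default` on the same ~500ms cadence
	// visible in the log until the service account exists or the deadline passes.
	func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
				"get", "sa", "default").Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not ready after %s", timeout)
	}

	func main() {
		if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}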
	I1025 10:35:30.552299  492025 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:30.552369  492025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:35:30.553012  492025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:30.553228  492025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:35:30.553317  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:35:30.553559  492025 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:30.553591  492025 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:35:30.553652  492025 addons.go:69] Setting storage-provisioner=true in profile "no-preload-768303"
	I1025 10:35:30.553665  492025 addons.go:238] Setting addon storage-provisioner=true in "no-preload-768303"
	I1025 10:35:30.553687  492025 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:35:30.554173  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.554833  492025 addons.go:69] Setting default-storageclass=true in profile "no-preload-768303"
	I1025 10:35:30.554854  492025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-768303"
	I1025 10:35:30.555131  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.557512  492025 out.go:179] * Verifying Kubernetes components...
	I1025 10:35:30.562952  492025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:30.595845  492025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:35:28.119005  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-491554
	
	I1025 10:35:28.119033  496139 ubuntu.go:182] provisioning hostname "newest-cni-491554"
	I1025 10:35:28.119097  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:28.136646  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:28.136961  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:28.137067  496139 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-491554 && echo "newest-cni-491554" | sudo tee /etc/hostname
	I1025 10:35:28.299633  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-491554
	
	I1025 10:35:28.299730  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:28.318561  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:28.318851  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:28.318867  496139 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-491554' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-491554/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-491554' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:35:28.475704  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
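Note: each "Run:" against the node travels over SSH to the forwarded host port that `docker container inspect` resolved above (127.0.0.1:33457 in this run), authenticating as the docker user with the generated machine key. A rough sketch of one such probe using golang.org/x/crypto/ssh; host-key verification is skipped here only because the target is a throwaway local container:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port are the ones this run logged; adjust for another profile.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local throwaway node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33457", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("node says: %s", out)
	}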
	I1025 10:35:28.475733  496139 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:35:28.475762  496139 ubuntu.go:190] setting up certificates
	I1025 10:35:28.475786  496139 provision.go:84] configureAuth start
	I1025 10:35:28.475855  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:28.500918  496139 provision.go:143] copyHostCerts
	I1025 10:35:28.500989  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:35:28.501002  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:35:28.501093  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:35:28.501203  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:35:28.501216  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:35:28.501246  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:35:28.501320  496139 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:35:28.501330  496139 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:35:28.501356  496139 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:35:28.501424  496139 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.newest-cni-491554 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-491554]
	I1025 10:35:29.112188  496139 provision.go:177] copyRemoteCerts
	I1025 10:35:29.112262  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:35:29.112306  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.130713  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.243583  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:35:29.264745  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:35:29.285192  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1025 10:35:29.305548  496139 provision.go:87] duration metric: took 829.739078ms to configureAuth
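Note: "generating server cert ... san=[...]" above is minikube minting a TLS server certificate whose SANs cover every name and address the machine will be reached by. A condensed Go sketch of the certificate template, self-signed here for brevity (the real cert is signed by the ca.pem/ca-key.pem pair listed in the auth options):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// SANs mirror the san=[...] list in the provision log above.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-491554"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-491554"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}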
	I1025 10:35:29.305621  496139 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:35:29.305853  496139 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:35:29.305973  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.323232  496139 main.go:141] libmachine: Using SSH client type: native
	I1025 10:35:29.323713  496139 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33457 <nil> <nil>}
	I1025 10:35:29.323754  496139 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:35:29.621453  496139 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:35:29.621479  496139 machine.go:96] duration metric: took 4.693793086s to provisionDockerMachine
	I1025 10:35:29.621490  496139 client.go:171] duration metric: took 11.65962061s to LocalClient.Create
	I1025 10:35:29.621508  496139 start.go:167] duration metric: took 11.659700045s to libmachine.API.Create "newest-cni-491554"
	I1025 10:35:29.621516  496139 start.go:293] postStartSetup for "newest-cni-491554" (driver="docker")
	I1025 10:35:29.621535  496139 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:35:29.621609  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:35:29.621661  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.645657  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.756043  496139 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:35:29.759261  496139 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:35:29.759290  496139 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:35:29.759307  496139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:35:29.759361  496139 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:35:29.759444  496139 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:35:29.759553  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:35:29.767655  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:35:29.788585  496139 start.go:296] duration metric: took 167.052865ms for postStartSetup
	I1025 10:35:29.788971  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:29.815581  496139 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/config.json ...
	I1025 10:35:29.815868  496139 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:35:29.815922  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.836737  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:29.947245  496139 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:35:29.953882  496139 start.go:128] duration metric: took 11.995754247s to createHost
	I1025 10:35:29.953911  496139 start.go:83] releasing machines lock for "newest-cni-491554", held for 11.995898199s
	I1025 10:35:29.953988  496139 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-491554
	I1025 10:35:29.978333  496139 ssh_runner.go:195] Run: cat /version.json
	I1025 10:35:29.978387  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:29.978408  496139 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:35:29.978468  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:35:30.052866  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:30.068194  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:35:30.196378  496139 ssh_runner.go:195] Run: systemctl --version
	I1025 10:35:30.326923  496139 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:35:30.389315  496139 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:35:30.398713  496139 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:35:30.398783  496139 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:35:30.437805  496139 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:35:30.437891  496139 start.go:495] detecting cgroup driver to use...
	I1025 10:35:30.437961  496139 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:35:30.438048  496139 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:35:30.468740  496139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:35:30.485449  496139 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:35:30.485561  496139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:35:30.509982  496139 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:35:30.531298  496139 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:35:30.835245  496139 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:35:31.062579  496139 docker.go:234] disabling docker service ...
	I1025 10:35:31.062646  496139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:35:31.102501  496139 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:35:31.126074  496139 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:35:31.345827  496139 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:35:31.588885  496139 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:35:31.613386  496139 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:35:31.648271  496139 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:35:31.648362  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.661363  496139 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:35:31.661432  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.681547  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.691381  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.705660  496139 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:35:31.714668  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.725971  496139 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.748519  496139 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:35:31.760644  496139 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:35:31.771397  496139 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:35:31.781756  496139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:31.985290  496139 ssh_runner.go:195] Run: sudo systemctl restart crio
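Note: the run of `sudo sed -i ...` commands above edits the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pin the pause image, switch the cgroup manager to cgroupfs, and recreate conmon_cgroup = "pod" next to it. The same rewrites expressed as Go regexp substitutions over an in-memory copy of the file (the sample content is illustrative):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

		// 1. pin the pause image
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// 2. force the cgroupfs cgroup manager
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		// 3. drop any existing conmon_cgroup line
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).
			ReplaceAllString(conf, "")
		// 4. re-add conmon_cgroup = "pod" right after cgroup_manager
		conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
			ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}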
	I1025 10:35:32.176876  496139 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:35:32.176950  496139 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:35:32.185497  496139 start.go:563] Will wait 60s for crictl version
	I1025 10:35:32.185575  496139 ssh_runner.go:195] Run: which crictl
	I1025 10:35:32.189580  496139 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:35:32.234289  496139 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:35:32.234384  496139 ssh_runner.go:195] Run: crio --version
	I1025 10:35:32.288207  496139 ssh_runner.go:195] Run: crio --version
	I1025 10:35:32.343418  496139 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:35:32.345571  496139 cli_runner.go:164] Run: docker network inspect newest-cni-491554 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:35:32.374503  496139 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:35:32.378685  496139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
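Note: the grep-then-echo pipeline above is an idempotent /etc/hosts update: drop any existing line tagged host.minikube.internal, then append the fresh mapping, so reruns never duplicate the entry. The same filter-and-append shape in Go, printing the result instead of rewriting the file:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost drops any line already tagged with name, then appends ip<TAB>name,
	// mirroring the { grep -v ...; echo ...; } pipeline from the log.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(upsertHost(string(data), "192.168.76.1", "host.minikube.internal"))
	}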
	I1025 10:35:32.398584  496139 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1025 10:35:32.401305  496139 kubeadm.go:883] updating cluster {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:35:32.401432  496139 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:35:32.401518  496139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:35:32.454728  496139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:35:32.454752  496139 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:35:32.454810  496139 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:35:32.513414  496139 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:35:32.513438  496139 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:35:32.513446  496139 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:35:32.513536  496139 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-491554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:35:32.513628  496139 ssh_runner.go:195] Run: crio config
	I1025 10:35:30.598253  492025 addons.go:238] Setting addon default-storageclass=true in "no-preload-768303"
	I1025 10:35:30.598299  492025 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:35:30.598739  492025 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:35:30.598918  492025 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:35:30.598935  492025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:35:30.598987  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:35:30.659052  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:35:30.661715  492025 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:35:30.661739  492025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:35:30.661808  492025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:35:30.695416  492025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33452 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:35:31.088653  492025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:35:31.144865  492025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:35:31.144985  492025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:31.175208  492025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:35:32.291296  492025 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.146395019s)
	I1025 10:35:32.291321  492025 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1025 10:35:32.292250  492025 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.147249261s)
	I1025 10:35:32.292866  492025 node_ready.go:35] waiting up to 6m0s for node "no-preload-768303" to be "Ready" ...
	I1025 10:35:32.799270  492025 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-768303" context rescaled to 1 replicas
	I1025 10:35:32.841641  492025 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.666346057s)
	I1025 10:35:32.844704  492025 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1025 10:35:32.847637  492025 addons.go:514] duration metric: took 2.294021528s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1025 10:35:34.297688  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:32.597881  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:32.597900  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:32.597922  496139 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:35:32.597946  496139 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-491554 NodeName:newest-cni-491554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:35:32.598066  496139 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-491554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:35:32.598133  496139 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:35:32.607593  496139 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:35:32.607663  496139 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:35:32.616697  496139 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:35:32.637488  496139 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:35:32.659407  496139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
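Note: the kubeadm config block printed above is not hand-written per cluster; minikube renders it from the options struct logged at kubeadm.go:190 and ships the result to the node as /var/tmp/minikube/kubeadm.yaml.new. A toy text/template sketch of that rendering step; Params is a hypothetical, much-reduced stand-in for the real struct:

	package main

	import (
		"os"
		"text/template"
	)

	// Params is a hypothetical, much-reduced stand-in for the options struct
	// that minikube logs at kubeadm.go:190.
	type Params struct {
		AdvertiseAddress string
		BindPort         int
		ClusterName      string
		PodSubnet        string
	}

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	clusterName: {{.ClusterName}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, Params{
			AdvertiseAddress: "192.168.76.2",
			BindPort:         8443,
			ClusterName:      "newest-cni-491554",
			PodSubnet:        "10.42.0.0/16",
		}); err != nil {
			panic(err)
		}
	}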
	I1025 10:35:32.682289  496139 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:35:32.687442  496139 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:35:32.699233  496139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:35:32.897164  496139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:35:32.938964  496139 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554 for IP: 192.168.76.2
	I1025 10:35:32.938984  496139 certs.go:195] generating shared ca certs ...
	I1025 10:35:32.939001  496139 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:32.939218  496139 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:35:32.939285  496139 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:35:32.939299  496139 certs.go:257] generating profile certs ...
	I1025 10:35:32.939420  496139 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key
	I1025 10:35:32.939446  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt with IP's: []
	I1025 10:35:34.216920  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt ...
	I1025 10:35:34.216950  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.crt: {Name:mk512ce90ddbdbbfd5ecabfbda6bc1400fb538c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.217112  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key ...
	I1025 10:35:34.217129  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key: {Name:mk37698c313a90d602b9cd8e52667fe080d096e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.217225  496139 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda
	I1025 10:35:34.217243  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:35:34.922846  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda ...
	I1025 10:35:34.922878  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda: {Name:mk0bc9ab90fa8bde62384ac873795799edbe0266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.923114  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda ...
	I1025 10:35:34.923132  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda: {Name:mka83abd3b7d52bb94c96307e96f984b99cd06e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:34.923258  496139 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt.1df2bdda -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt
	I1025 10:35:34.923344  496139 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key
	I1025 10:35:34.923409  496139 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key
	I1025 10:35:34.923430  496139 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt with IP's: []
	I1025 10:35:35.371774  496139 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt ...
	I1025 10:35:35.371806  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt: {Name:mk7daa5b71a10a3820810a893d97f214371b9594 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:35.371974  496139 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key ...
	I1025 10:35:35.372000  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key: {Name:mk2248c415d6104d54a2a78442edd92357c31ab8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:35:35.372186  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:35:35.372233  496139 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:35:35.372247  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:35:35.372273  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:35:35.372299  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:35:35.372326  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:35:35.372382  496139 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:35:35.373007  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:35:35.395988  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:35:35.416733  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:35:35.437607  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:35:35.460605  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:35:35.480944  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:35:35.501021  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:35:35.520082  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:35:35.539594  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:35:35.559047  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:35:35.578295  496139 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:35:35.598792  496139 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:35:35.616514  496139 ssh_runner.go:195] Run: openssl version
	I1025 10:35:35.623770  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:35:35.636912  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.641643  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.641722  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:35:35.688998  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:35:35.699506  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:35:35.713048  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.717115  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.717202  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:35:35.760659  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:35:35.769336  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:35:35.777898  496139 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.782317  496139 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.782378  496139 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:35:35.830779  496139 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
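Note: the `ln -fs ... /etc/ssl/certs/<hash>.0` steps above wire each installed PEM into OpenSSL's hashed-directory lookup: the link name (b5213941.0 for minikubeCA.pem in this run) is the subject hash that the preceding `openssl x509 -hash -noout` prints. A small Go sketch of that link step, shelling out to openssl for the hash; the path reuses the one from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash used for c_rehash-style lookups.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			panic(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		if err := os.Symlink(pemPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", pemPath, "as", link)
	}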
	I1025 10:35:35.839573  496139 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:35:35.845539  496139 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:35:35.845586  496139 kubeadm.go:400] StartCluster: {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:35:35.845653  496139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:35:35.845726  496139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:35:35.907251  496139 cri.go:89] found id: ""
	I1025 10:35:35.907409  496139 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:35:35.914963  496139 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:35:35.923258  496139 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:35:35.923374  496139 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:35:35.934564  496139 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:35:35.934634  496139 kubeadm.go:157] found existing configuration files:
	
	I1025 10:35:35.934723  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:35:35.942134  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:35:35.942248  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:35:35.949675  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:35:35.958432  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:35:35.958570  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:35:35.967060  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:35:35.975002  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:35:35.975114  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:35:35.983616  496139 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:35:35.991367  496139 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:35:35.991438  496139 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
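	Each grep/rm pair above is the same idempotent cleanup applied to a different kubeconfig: keep the file only if it already points at control-plane.minikube.internal:8443, otherwise delete it so kubeadm can regenerate it. A sketch of the equivalent loop:
	
	    ENDPOINT="https://control-plane.minikube.internal:8443"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      # grep exits non-zero when the endpoint is absent (or the file is missing),
	      # in which case the stale kubeconfig is removed.
	      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	    done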
	I1025 10:35:35.999385  496139 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:35:36.046548  496139 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:35:36.046735  496139 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:35:36.072203  496139 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:35:36.072324  496139 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:35:36.072411  496139 kubeadm.go:318] OS: Linux
	I1025 10:35:36.072490  496139 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:35:36.072573  496139 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:35:36.072653  496139 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:35:36.072738  496139 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:35:36.072820  496139 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:35:36.072904  496139 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:35:36.072984  496139 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:35:36.073070  496139 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:35:36.073153  496139 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:35:36.150331  496139 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:35:36.150455  496139 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:35:36.150555  496139 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:35:36.158861  496139 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1025 10:35:36.164348  496139 out.go:252]   - Generating certificates and keys ...
	I1025 10:35:36.164447  496139 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:35:36.164520  496139 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:35:36.224149  496139 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:35:36.448598  496139 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:35:36.905226  496139 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	W1025 10:35:36.312175  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	W1025 10:35:38.797795  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:38.088533  496139 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:35:38.503715  496139 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:35:38.504221  496139 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-491554] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:35:38.758714  496139 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:35:38.759019  496139 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-491554] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:35:39.322166  496139 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:35:39.888227  496139 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:35:40.514426  496139 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:35:40.514727  496139 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:35:41.199650  496139 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:35:42.128848  496139 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:35:43.243309  496139 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:35:43.949534  496139 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:35:44.259473  496139 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:35:44.260055  496139 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:35:44.262602  496139 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 10:35:41.297101  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	W1025 10:35:43.297143  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:44.265902  496139 out.go:252]   - Booting up control plane ...
	I1025 10:35:44.266010  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:35:44.266092  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:35:44.266161  496139 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:35:44.288811  496139 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:35:44.289369  496139 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:35:44.299435  496139 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:35:44.300212  496139 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:35:44.300390  496139 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:35:44.451625  496139 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:35:44.451775  496139 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:35:45.953470  496139 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501762581s
	I1025 10:35:45.957391  496139 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:35:45.957514  496139 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:35:45.957875  496139 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:35:45.957968  496139 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
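	The kubelet and control-plane checks poll fixed health endpoints; the same probes can be run by hand (curl -k because the serving certs are cluster-signed), with the addresses taken from the lines above:
	
	    curl -s  http://127.0.0.1:10248/healthz   # kubelet
	    curl -sk https://192.168.76.2:8443/livez  # kube-apiserver
	    curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
	    curl -sk https://127.0.0.1:10259/livez    # kube-scheduler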
	W1025 10:35:45.301678  492025 node_ready.go:57] node "no-preload-768303" has "Ready":"False" status (will retry)
	I1025 10:35:46.796154  492025 node_ready.go:49] node "no-preload-768303" is "Ready"
	I1025 10:35:46.796188  492025 node_ready.go:38] duration metric: took 14.503306355s for node "no-preload-768303" to be "Ready" ...
	I1025 10:35:46.796203  492025 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:35:46.796266  492025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:35:46.813938  492025 api_server.go:72] duration metric: took 16.260672877s to wait for apiserver process to appear ...
	I1025 10:35:46.813966  492025 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:35:46.813990  492025 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1025 10:35:46.825736  492025 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1025 10:35:46.826798  492025 api_server.go:141] control plane version: v1.34.1
	I1025 10:35:46.826824  492025 api_server.go:131] duration metric: took 12.849832ms to wait for apiserver health ...
	I1025 10:35:46.826835  492025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:35:46.830601  492025 system_pods.go:59] 8 kube-system pods found
	I1025 10:35:46.830648  492025 system_pods.go:61] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:46.830658  492025 system_pods.go:61] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:46.830668  492025 system_pods.go:61] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:46.830673  492025 system_pods.go:61] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:46.830684  492025 system_pods.go:61] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:46.830693  492025 system_pods.go:61] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:46.830703  492025 system_pods.go:61] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:46.830708  492025 system_pods.go:61] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:46.830716  492025 system_pods.go:74] duration metric: took 3.873564ms to wait for pod list to return data ...
	I1025 10:35:46.830728  492025 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:35:46.833432  492025 default_sa.go:45] found service account: "default"
	I1025 10:35:46.833461  492025 default_sa.go:55] duration metric: took 2.726575ms for default service account to be created ...
	I1025 10:35:46.833470  492025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1025 10:35:46.836392  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:46.836424  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:46.836431  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:46.836450  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:46.836456  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:46.836467  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:46.836471  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:46.836476  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:46.836489  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:46.836507  492025 retry.go:31] will retry after 245.175314ms: missing components: kube-dns
	I1025 10:35:47.096084  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.096123  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:47.096135  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.096141  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.096147  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.096152  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.096156  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.096159  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.096169  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:47.096182  492025 retry.go:31] will retry after 327.446637ms: missing components: kube-dns
	I1025 10:35:47.443321  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.443358  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:35:47.443366  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.443372  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.443378  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.443383  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.443387  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.443391  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.443401  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1025 10:35:47.443418  492025 retry.go:31] will retry after 298.548705ms: missing components: kube-dns
	I1025 10:35:47.747559  492025 system_pods.go:86] 8 kube-system pods found
	I1025 10:35:47.747593  492025 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Running
	I1025 10:35:47.747600  492025 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running
	I1025 10:35:47.747605  492025 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:35:47.747609  492025 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running
	I1025 10:35:47.747614  492025 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running
	I1025 10:35:47.747618  492025 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:35:47.747622  492025 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running
	I1025 10:35:47.747626  492025 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Running
	I1025 10:35:47.747633  492025 system_pods.go:126] duration metric: took 914.157593ms to wait for k8s-apps to be running ...
	I1025 10:35:47.747645  492025 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:35:47.747701  492025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:35:47.770357  492025 system_svc.go:56] duration metric: took 22.702207ms WaitForService to wait for kubelet
	I1025 10:35:47.770383  492025 kubeadm.go:586] duration metric: took 17.217123335s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:35:47.770403  492025 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:35:47.773820  492025 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:35:47.773865  492025 node_conditions.go:123] node cpu capacity is 2
	I1025 10:35:47.773878  492025 node_conditions.go:105] duration metric: took 3.468914ms to run NodePressure ...
	I1025 10:35:47.773900  492025 start.go:241] waiting for startup goroutines ...
	I1025 10:35:47.773914  492025 start.go:246] waiting for cluster config update ...
	I1025 10:35:47.773934  492025 start.go:255] writing updated cluster config ...
	I1025 10:35:47.774288  492025 ssh_runner.go:195] Run: rm -f paused
	I1025 10:35:47.783831  492025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:35:47.787579  492025 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.796854  492025 pod_ready.go:94] pod "coredns-66bc5c9577-xpwdq" is "Ready"
	I1025 10:35:47.796890  492025 pod_ready.go:86] duration metric: took 9.273897ms for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.802738  492025 pod_ready.go:83] waiting for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.811975  492025 pod_ready.go:94] pod "etcd-no-preload-768303" is "Ready"
	I1025 10:35:47.812001  492025 pod_ready.go:86] duration metric: took 9.229728ms for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.814579  492025 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.821178  492025 pod_ready.go:94] pod "kube-apiserver-no-preload-768303" is "Ready"
	I1025 10:35:47.821209  492025 pod_ready.go:86] duration metric: took 6.600804ms for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:47.827372  492025 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.188143  492025 pod_ready.go:94] pod "kube-controller-manager-no-preload-768303" is "Ready"
	I1025 10:35:48.188172  492025 pod_ready.go:86] duration metric: took 360.769381ms for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.388957  492025 pod_ready.go:83] waiting for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.788647  492025 pod_ready.go:94] pod "kube-proxy-m9bnn" is "Ready"
	I1025 10:35:48.788738  492025 pod_ready.go:86] duration metric: took 399.711479ms for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:48.988464  492025 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:49.388263  492025 pod_ready.go:94] pod "kube-scheduler-no-preload-768303" is "Ready"
	I1025 10:35:49.388330  492025 pod_ready.go:86] duration metric: took 399.797147ms for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:35:49.388357  492025 pod_ready.go:40] duration metric: took 1.604481041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:35:49.491358  492025 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:35:49.494749  492025 out.go:179] * Done! kubectl is now configured to use "no-preload-768303" cluster and "default" namespace by default
	I1025 10:35:51.879418  496139 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.920971789s
	I1025 10:35:52.640577  496139 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.68309712s
	I1025 10:35:53.959798  496139 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.001926144s
	I1025 10:35:53.981857  496139 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:35:54.007366  496139 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:35:54.032732  496139 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:35:54.032942  496139 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-491554 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:35:54.046422  496139 kubeadm.go:318] [bootstrap-token] Using token: v775vr.d5u8fng82rptj6kr
	I1025 10:35:54.049341  496139 out.go:252]   - Configuring RBAC rules ...
	I1025 10:35:54.049468  496139 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:35:54.058531  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:35:54.072319  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:35:54.078266  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:35:54.084488  496139 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:35:54.092651  496139 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:35:54.366906  496139 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:35:54.825169  496139 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:35:55.367126  496139 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:35:55.368310  496139 kubeadm.go:318] 
	I1025 10:35:55.368402  496139 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:35:55.368413  496139 kubeadm.go:318] 
	I1025 10:35:55.368495  496139 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:35:55.368505  496139 kubeadm.go:318] 
	I1025 10:35:55.368531  496139 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:35:55.368604  496139 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:35:55.368660  496139 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:35:55.368669  496139 kubeadm.go:318] 
	I1025 10:35:55.368726  496139 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:35:55.368734  496139 kubeadm.go:318] 
	I1025 10:35:55.368784  496139 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:35:55.368789  496139 kubeadm.go:318] 
	I1025 10:35:55.368843  496139 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:35:55.368926  496139 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:35:55.369003  496139 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:35:55.369012  496139 kubeadm.go:318] 
	I1025 10:35:55.369100  496139 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:35:55.369185  496139 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:35:55.369192  496139 kubeadm.go:318] 
	I1025 10:35:55.369301  496139 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token v775vr.d5u8fng82rptj6kr \
	I1025 10:35:55.369409  496139 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:35:55.369432  496139 kubeadm.go:318] 	--control-plane 
	I1025 10:35:55.369437  496139 kubeadm.go:318] 
	I1025 10:35:55.369525  496139 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:35:55.369530  496139 kubeadm.go:318] 
	I1025 10:35:55.369615  496139 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token v775vr.d5u8fng82rptj6kr \
	I1025 10:35:55.369721  496139 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:35:55.376260  496139 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:35:55.376506  496139 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:35:55.376620  496139 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
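	The --discovery-token-ca-cert-hash printed with the join commands is a SHA-256 digest of the cluster CA's public key, which joining nodes use to authenticate the control plane. It can be recomputed with the standard kubeadm recipe; this sketch assumes the CA sits at /var/lib/minikube/certs/ca.crt (minikube's certificateDir in this run) and an RSA key:
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'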
	I1025 10:35:55.376636  496139 cni.go:84] Creating CNI manager for ""
	I1025 10:35:55.376644  496139 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:35:55.379781  496139 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1025 10:35:55.382673  496139 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:35:55.387064  496139 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:35:55.387084  496139 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:35:55.402565  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:35:55.702728  496139 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:35:55.702828  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:55.702873  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-491554 minikube.k8s.io/updated_at=2025_10_25T10_35_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=newest-cni-491554 minikube.k8s.io/primary=true
	I1025 10:35:55.856331  496139 ops.go:34] apiserver oom_adj: -16
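	The ops.go line confirms the API server runs with oom_adj -16, telling the kernel's OOM killer to strongly prefer other victims. The same check by hand, including the modern oom_score_adj interface:
	
	    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	    cat "/proc/$PID/oom_adj"        # legacy knob read by the test (-16 here)
	    cat "/proc/$PID/oom_score_adj"  # current interface, range -1000..1000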
	I1025 10:35:55.856442  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:56.356494  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:56.857056  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:57.357392  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:57.857253  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:58.357082  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:58.856587  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:59.356518  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:35:59.856634  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:36:00.356574  496139 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:36:00.745412  496139 kubeadm.go:1113] duration metric: took 5.04263903s to wait for elevateKubeSystemPrivileges
	I1025 10:36:00.745438  496139 kubeadm.go:402] duration metric: took 24.899855585s to StartCluster
	I1025 10:36:00.745456  496139 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:00.745515  496139 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:00.746513  496139 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:00.746753  496139 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:00.746914  496139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:36:00.747223  496139 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:00.747277  496139 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:36:00.747343  496139 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-491554"
	I1025 10:36:00.747356  496139 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-491554"
	I1025 10:36:00.747377  496139 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:00.747892  496139 addons.go:69] Setting default-storageclass=true in profile "newest-cni-491554"
	I1025 10:36:00.747917  496139 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-491554"
	I1025 10:36:00.748154  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:00.748214  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:00.755917  496139 out.go:179] * Verifying Kubernetes components...
	I1025 10:36:00.762588  496139 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:00.797233  496139 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:36:00.799920  496139 addons.go:238] Setting addon default-storageclass=true in "newest-cni-491554"
	I1025 10:36:00.799967  496139 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:00.800209  496139 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:00.800225  496139 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:36:00.800280  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:00.804713  496139 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:00.827253  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:00.852190  496139 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:00.852209  496139 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:36:00.852283  496139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:00.881033  496139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33457 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:01.290155  496139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:01.365967  496139 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:01.366156  496139 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:36:01.507649  496139 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:02.403785  496139 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.113587852s)
	I1025 10:36:02.403840  496139 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.037665874s)
	I1025 10:36:02.403851  496139 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
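	The sed pipeline completed above rewrites the coredns ConfigMap so the Corefile gains a hosts block ahead of the forward plugin, mapping host.minikube.internal to the host gateway (192.168.76.1 here). One way to verify the injection, with the expected fragment shown as comments:
	
	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
	      | grep -A3 'hosts {'
	    #        hosts {
	    #           192.168.76.1 host.minikube.internal
	    #           fallthrough
	    #        }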
	I1025 10:36:02.405070  496139 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.039074677s)
	I1025 10:36:02.405869  496139 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:36:02.405916  496139 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:36:02.442064  496139 api_server.go:72] duration metric: took 1.695281945s to wait for apiserver process to appear ...
	I1025 10:36:02.442137  496139 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:36:02.442170  496139 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:36:02.469277  496139 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:36:02.472609  496139 api_server.go:141] control plane version: v1.34.1
	I1025 10:36:02.472634  496139 api_server.go:131] duration metric: took 30.477484ms to wait for apiserver health ...
	I1025 10:36:02.472643  496139 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:36:02.485133  496139 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:36:02.487427  496139 system_pods.go:59] 9 kube-system pods found
	I1025 10:36:02.487468  496139 system_pods.go:61] "coredns-66bc5c9577-psbjx" [22d55d7c-c039-4dd5-8240-aec208d26cca] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:02.487476  496139 system_pods.go:61] "coredns-66bc5c9577-zxmft" [c65f8d6e-61d0-4d82-b7af-b758693a7232] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:02.487484  496139 system_pods.go:61] "etcd-newest-cni-491554" [f243a6c8-1369-43b7-99b9-76822aea8145] Running
	I1025 10:36:02.487489  496139 system_pods.go:61] "kindnet-p6hkm" [b1e90261-b931-4949-be7a-bb6e26597d55] Running
	I1025 10:36:02.487497  496139 system_pods.go:61] "kube-apiserver-newest-cni-491554" [571dea08-1bde-4093-8f36-82f161cbd707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:36:02.487506  496139 system_pods.go:61] "kube-controller-manager-newest-cni-491554" [c72d8953-fb33-4daa-b825-2b161239fc0e] Running
	I1025 10:36:02.487511  496139 system_pods.go:61] "kube-proxy-vwhfz" [151013b4-f8bd-444f-b983-7fd1136a2003] Running
	I1025 10:36:02.487523  496139 system_pods.go:61] "kube-scheduler-newest-cni-491554" [14ab0451-bfca-4937-8ac3-892c41c89d45] Running
	I1025 10:36:02.487528  496139 system_pods.go:61] "storage-provisioner" [a653f861-3131-49d6-aa3d-4f280a2d5535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:02.487535  496139 system_pods.go:74] duration metric: took 14.884816ms to wait for pod list to return data ...
	I1025 10:36:02.487548  496139 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:36:02.489641  496139 addons.go:514] duration metric: took 1.742345952s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:36:02.501502  496139 default_sa.go:45] found service account: "default"
	I1025 10:36:02.501531  496139 default_sa.go:55] duration metric: took 13.975797ms for default service account to be created ...
	I1025 10:36:02.501544  496139 kubeadm.go:586] duration metric: took 1.754767639s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:36:02.501585  496139 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:36:02.508526  496139 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:36:02.508561  496139 node_conditions.go:123] node cpu capacity is 2
	I1025 10:36:02.508606  496139 node_conditions.go:105] duration metric: took 7.014094ms to run NodePressure ...
	I1025 10:36:02.508631  496139 start.go:241] waiting for startup goroutines ...
	I1025 10:36:02.908112  496139 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-491554" context rescaled to 1 replicas
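	The rescale trims CoreDNS to a single replica, which is enough for a one-node cluster; the equivalent kubectl call is:
	
	    kubectl -n kube-system scale deployment coredns --replicas=1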
	I1025 10:36:02.908143  496139 start.go:246] waiting for cluster config update ...
	I1025 10:36:02.908156  496139 start.go:255] writing updated cluster config ...
	I1025 10:36:02.908447  496139 ssh_runner.go:195] Run: rm -f paused
	I1025 10:36:02.992193  496139 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:36:02.995909  496139 out.go:179] * Done! kubectl is now configured to use "newest-cni-491554" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.282749522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.28757757Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c98c7acf-0922-45a4-9cf1-ed796b90acc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.297860369Z" level=info msg="Ran pod sandbox bb979804882fdbcd711a237f46c511afb451c42f5d8e1afe9ad892225ad97456 with infra container: kube-system/kube-proxy-vwhfz/POD" id=c98c7acf-0922-45a4-9cf1-ed796b90acc1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.302708635Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5547540a-a279-43c2-9048-e516499fe742 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.309298862Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=40826031-f31c-4ea7-b4d4-be0d756d8172 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.329194975Z" level=info msg="Creating container: kube-system/kube-proxy-vwhfz/kube-proxy" id=23a9ff9c-71f8-4d70-b703-15d15abf8adb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.329334128Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.33497293Z" level=info msg="Running pod sandbox: kube-system/kindnet-p6hkm/POD" id=83b1d132-3c54-4ad4-8239-333350423502 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.341955393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.364046011Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=83b1d132-3c54-4ad4-8239-333350423502 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.365472528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.373183938Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.411251632Z" level=info msg="Ran pod sandbox 766adf61227688b54678cc4d8ad04a9294b80ace7be58d3c5c2f8f7b95945116 with infra container: kube-system/kindnet-p6hkm/POD" id=83b1d132-3c54-4ad4-8239-333350423502 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.431027998Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ac187ce8-3cff-444b-b709-e1d6496c0350 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.459004389Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f25b2049-d4b7-48e3-ad33-06361d243327 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.486956032Z" level=info msg="Creating container: kube-system/kindnet-p6hkm/kindnet-cni" id=943726cb-f8b4-4884-905c-d20e0f28b66b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.487291266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.526537392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.528226157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.584941691Z" level=info msg="Created container 0b5c396e04e136770b3626e5b1b3cd4b848fbf75d5664a4f37ef57ca3803c416: kube-system/kube-proxy-vwhfz/kube-proxy" id=23a9ff9c-71f8-4d70-b703-15d15abf8adb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.595441526Z" level=info msg="Starting container: 0b5c396e04e136770b3626e5b1b3cd4b848fbf75d5664a4f37ef57ca3803c416" id=60da77a7-fd78-4547-96fc-3d8c2d6c5c82 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.605931582Z" level=info msg="Started container" PID=1428 containerID=0b5c396e04e136770b3626e5b1b3cd4b848fbf75d5664a4f37ef57ca3803c416 description=kube-system/kube-proxy-vwhfz/kube-proxy id=60da77a7-fd78-4547-96fc-3d8c2d6c5c82 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb979804882fdbcd711a237f46c511afb451c42f5d8e1afe9ad892225ad97456
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.643105592Z" level=info msg="Created container 87101ee5c29cffcdb6f896a055cc92f3e852e1032f451adf54c5633ade4f689a: kube-system/kindnet-p6hkm/kindnet-cni" id=943726cb-f8b4-4884-905c-d20e0f28b66b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.644047745Z" level=info msg="Starting container: 87101ee5c29cffcdb6f896a055cc92f3e852e1032f451adf54c5633ade4f689a" id=74486209-df25-4f3d-b5f4-59a24767a47b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:00 newest-cni-491554 crio[837]: time="2025-10-25T10:36:00.665582022Z" level=info msg="Started container" PID=1433 containerID=87101ee5c29cffcdb6f896a055cc92f3e852e1032f451adf54c5633ade4f689a description=kube-system/kindnet-p6hkm/kindnet-cni id=74486209-df25-4f3d-b5f4-59a24767a47b name=/runtime.v1.RuntimeService/StartContainer sandboxID=766adf61227688b54678cc4d8ad04a9294b80ace7be58d3c5c2f8f7b95945116
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	87101ee5c29cf       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   766adf6122768       kindnet-p6hkm                               kube-system
	0b5c396e04e13       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                0                   bb979804882fd       kube-proxy-vwhfz                            kube-system
	0a74716a17983       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago      Running             kube-scheduler            0                   0dc3c9cb63736       kube-scheduler-newest-cni-491554            kube-system
	62f7cd0b2ed9a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago      Running             kube-controller-manager   0                   223c3d3dc07a2       kube-controller-manager-newest-cni-491554   kube-system
	debb755186136       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago      Running             kube-apiserver            0                   fecff723ad6e9       kube-apiserver-newest-cni-491554            kube-system
	9c9e4ce11d174       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago      Running             etcd                      0                   fa77de5769068       etcd-newest-cni-491554                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-491554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-491554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-491554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-491554
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:35:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:35:55 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-491554
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0f4f685b-8865-430c-806a-9e13f4963eb6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-491554                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11s
	  kube-system                 kindnet-p6hkm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5s
	  kube-system                 kube-apiserver-newest-cni-491554             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-491554    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-vwhfz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-newest-cni-491554             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  19s (x8 over 19s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x8 over 19s)  kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x8 over 19s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-491554 event: Registered Node newest-cni-491554 in Controller
	
	
	==> dmesg <==
	[Oct25 10:14] overlayfs: idmapped layers are currently not supported
	[Oct25 10:15] overlayfs: idmapped layers are currently not supported
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9c9e4ce11d174ad6f398dc8c70a8d5fe283954a8fece088023d993c7bf573552] <==
	{"level":"warn","ts":"2025-10-25T10:35:49.636890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.688347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.723718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.765152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.819774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.830934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.855032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.889203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.936164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:49.979915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.020099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.055655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.111385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.155261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.189857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.208747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.243105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.268564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.291449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.321479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.361972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.395511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.420192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.453900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:35:50.578417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51090","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:04 up  2:18,  0 user,  load average: 4.44, 3.81, 3.26
	Linux newest-cni-491554 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [87101ee5c29cffcdb6f896a055cc92f3e852e1032f451adf54c5633ade4f689a] <==
	I1025 10:36:00.795735       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:36:00.796122       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:36:00.796267       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:36:00.796280       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:36:00.796335       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:36:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:36:01.122292       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:36:01.122320       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:36:01.122330       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:36:01.123038       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [debb755186136ad8b7946f38eb6bb9ccb9db08841e0e0cb97c09025192c59d9e] <==
	I1025 10:35:51.822032       1 policy_source.go:240] refreshing policies
	I1025 10:35:51.888507       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:35:51.934052       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:51.934187       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1025 10:35:51.963361       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1025 10:35:51.977080       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:35:52.009066       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:52.011041       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:35:52.464809       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1025 10:35:52.471820       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1025 10:35:52.471848       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:35:53.375100       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:35:53.464203       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:35:53.598523       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1025 10:35:53.606542       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1025 10:35:53.607899       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:35:53.614380       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:35:53.958308       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:35:54.801868       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:35:54.824188       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1025 10:35:54.841477       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1025 10:35:59.440732       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:35:59.816961       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:59.829101       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1025 10:35:59.917271       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [62f7cd0b2ed9a2022c98bd3f006ca95aa4603a10955cacdf6f9698c3479d1594] <==
	I1025 10:35:59.099380       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:35:59.106183       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:35:59.108052       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1025 10:35:59.106294       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:35:59.106305       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:35:59.107449       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:35:59.107464       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:35:59.107473       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:35:59.108142       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:35:59.108159       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:35:59.106281       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:35:59.111460       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1025 10:35:59.111687       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:35:59.112004       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-491554" podCIDRs=["10.42.0.0/24"]
	I1025 10:35:59.116794       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:35:59.117227       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:35:59.124473       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:35:59.126844       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:35:59.126869       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:35:59.126877       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:35:59.132394       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:35:59.154987       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1025 10:35:59.156606       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1025 10:35:59.156685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:35:59.161194       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [0b5c396e04e136770b3626e5b1b3cd4b848fbf75d5664a4f37ef57ca3803c416] <==
	I1025 10:36:00.737226       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:36:01.086841       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:36:01.189119       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:36:01.189286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:36:01.191641       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:36:01.244289       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:36:01.246115       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:36:01.252801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:36:01.253450       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:36:01.253745       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:01.265124       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:36:01.265225       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:36:01.265670       1 config.go:200] "Starting service config controller"
	I1025 10:36:01.265727       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:36:01.266547       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:36:01.266585       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:36:01.267345       1 config.go:309] "Starting node config controller"
	I1025 10:36:01.267353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:36:01.365675       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:36:01.366841       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:36:01.366886       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:36:01.367629       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [0a74716a179832dced3f960e37a777de3ec43b37258409f3837d9445c2a9eb26] <==
	I1025 10:35:52.622615       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:35:52.625741       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:35:52.625827       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:35:52.625853       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:35:52.625872       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1025 10:35:52.629355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1025 10:35:52.631296       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1025 10:35:52.631630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1025 10:35:52.631798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1025 10:35:52.633501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1025 10:35:52.635292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1025 10:35:52.638193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1025 10:35:52.638348       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1025 10:35:52.638562       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1025 10:35:52.638826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1025 10:35:52.638915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1025 10:35:52.639056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1025 10:35:52.640243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1025 10:35:52.640722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1025 10:35:52.641091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1025 10:35:52.642261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1025 10:35:52.642459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1025 10:35:52.642650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1025 10:35:52.643264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1025 10:35:54.326802       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: E1025 10:35:55.100475    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-491554\" already exists" pod="kube-system/kube-scheduler-newest-cni-491554"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.737485    1314 apiserver.go:52] "Watching apiserver"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.794126    1314 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.876277    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.877487    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: E1025 10:35:55.896264    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-491554\" already exists" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: E1025 10:35:55.896515    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-491554\" already exists" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.936748    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-491554" podStartSLOduration=2.936727492 podStartE2EDuration="2.936727492s" podCreationTimestamp="2025-10-25 10:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:55.91825641 +0000 UTC m=+1.278744575" watchObservedRunningTime="2025-10-25 10:35:55.936727492 +0000 UTC m=+1.297215640"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.959000    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-491554" podStartSLOduration=2.958980903 podStartE2EDuration="2.958980903s" podCreationTimestamp="2025-10-25 10:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:55.936989125 +0000 UTC m=+1.297477273" watchObservedRunningTime="2025-10-25 10:35:55.958980903 +0000 UTC m=+1.319469051"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.978417    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-491554" podStartSLOduration=0.978397223 podStartE2EDuration="978.397223ms" podCreationTimestamp="2025-10-25 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:55.959648705 +0000 UTC m=+1.320136861" watchObservedRunningTime="2025-10-25 10:35:55.978397223 +0000 UTC m=+1.338885379"
	Oct 25 10:35:55 newest-cni-491554 kubelet[1314]: I1025 10:35:55.992673    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-491554" podStartSLOduration=0.992654986 podStartE2EDuration="992.654986ms" podCreationTimestamp="2025-10-25 10:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:35:55.979233971 +0000 UTC m=+1.339722143" watchObservedRunningTime="2025-10-25 10:35:55.992654986 +0000 UTC m=+1.353143159"
	Oct 25 10:35:59 newest-cni-491554 kubelet[1314]: I1025 10:35:59.135843    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:35:59 newest-cni-491554 kubelet[1314]: I1025 10:35:59.137273    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036431    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xn8v\" (UniqueName: \"kubernetes.io/projected/151013b4-f8bd-444f-b983-7fd1136a2003-kube-api-access-8xn8v\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036498    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-xtables-lock\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036528    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-xtables-lock\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036547    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhn4t\" (UniqueName: \"kubernetes.io/projected/b1e90261-b931-4949-be7a-bb6e26597d55-kube-api-access-zhn4t\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036570    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-lib-modules\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036585    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-lib-modules\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036604    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-cni-cfg\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.036624    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/151013b4-f8bd-444f-b983-7fd1136a2003-kube-proxy\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.173648    1314 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: W1025 10:36:00.296593    1314 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/crio-bb979804882fdbcd711a237f46c511afb451c42f5d8e1afe9ad892225ad97456 WatchSource:0}: Error finding container bb979804882fdbcd711a237f46c511afb451c42f5d8e1afe9ad892225ad97456: Status 404 returned error can't find the container with id bb979804882fdbcd711a237f46c511afb451c42f5d8e1afe9ad892225ad97456
	Oct 25 10:36:00 newest-cni-491554 kubelet[1314]: I1025 10:36:00.931107    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-p6hkm" podStartSLOduration=1.931090575 podStartE2EDuration="1.931090575s" podCreationTimestamp="2025-10-25 10:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:36:00.930559859 +0000 UTC m=+6.291048032" watchObservedRunningTime="2025-10-25 10:36:00.931090575 +0000 UTC m=+6.291578723"
	Oct 25 10:36:01 newest-cni-491554 kubelet[1314]: I1025 10:36:01.022285    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vwhfz" podStartSLOduration=2.022188198 podStartE2EDuration="2.022188198s" podCreationTimestamp="2025-10-25 10:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-25 10:36:01.022259551 +0000 UTC m=+6.382747731" watchObservedRunningTime="2025-10-25 10:36:01.022188198 +0000 UTC m=+6.382676346"
	

                                                
                                                
-- /stdout --
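The describe-nodes capture above contains the root condition behind this snapshot: the node reports Ready=False with "no CNI configuration file in /etc/cni/net.d/", meaning kindnet had not yet written its CNI config when the logs were taken. A minimal triage sketch in shell, assuming SSH access through the minikube CLI (the conflist filename is kindnet's usual default and is an assumption, not confirmed by this log):

	# Check whether the CNI config directory has been populated on the node.
	minikube -p newest-cni-491554 ssh -- ls -l /etc/cni/net.d/
	# If a file is present, inspect the config kindnet wrote (filename assumed).
	minikube -p newest-cni-491554 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist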
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-491554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zxmft storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner: exit status 1 (85.489144ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zxmft" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.48s)
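The post-mortem above hit a common race: the field-selector query listed coredns-66bc5c9577-zxmft and storage-provisioner as non-running, but both were NotFound by the time describe ran, plausibly because they were recreated or still being scheduled while the node was NotReady. A sketch, assuming the same kubectl context, that captures the list and the full pod detail in a single list call so the two views cannot diverge:

	# Snapshot non-running pods with their full spec/status in one API list,
	# avoiding the list-then-describe window seen above.
	kubectl --context newest-cni-491554 get pods -A \
	  --field-selector=status.phase!=Running -o yaml > non-running-pods.yaml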

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-491554 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-491554 --alsologtostderr -v=1: exit status 80 (2.498927899s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-491554 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:36:24.313108  503488 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:36:24.315998  503488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:24.316036  503488 out.go:374] Setting ErrFile to fd 2...
	I1025 10:36:24.316056  503488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:24.316382  503488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:36:24.316710  503488 out.go:368] Setting JSON to false
	I1025 10:36:24.316764  503488 mustload.go:65] Loading cluster: newest-cni-491554
	I1025 10:36:24.317239  503488 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:24.332681  503488 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:24.356578  503488 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:24.356876  503488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:24.477660  503488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-25 10:36:24.467299815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:24.478330  503488 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-491554 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:36:24.481627  503488 out.go:179] * Pausing node newest-cni-491554 ... 
	I1025 10:36:24.489475  503488 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:24.489798  503488 ssh_runner.go:195] Run: systemctl --version
	I1025 10:36:24.489840  503488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:24.516953  503488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:24.638448  503488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:36:24.654874  503488 pause.go:52] kubelet running: true
	I1025 10:36:24.654935  503488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:36:24.906588  503488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:36:24.906670  503488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:36:25.037062  503488 cri.go:89] found id: "0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9"
	I1025 10:36:25.037086  503488 cri.go:89] found id: "d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68"
	I1025 10:36:25.037092  503488 cri.go:89] found id: "9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3"
	I1025 10:36:25.037096  503488 cri.go:89] found id: "1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1"
	I1025 10:36:25.037100  503488 cri.go:89] found id: "43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719"
	I1025 10:36:25.037105  503488 cri.go:89] found id: "c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17"
	I1025 10:36:25.037109  503488 cri.go:89] found id: ""
	I1025 10:36:25.037165  503488 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:36:25.053567  503488 retry.go:31] will retry after 253.020508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:25Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:25.306990  503488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:36:25.339643  503488 pause.go:52] kubelet running: false
	I1025 10:36:25.339727  503488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:36:25.722002  503488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:36:25.722089  503488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:36:25.977956  503488 cri.go:89] found id: "0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9"
	I1025 10:36:25.977977  503488 cri.go:89] found id: "d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68"
	I1025 10:36:25.977990  503488 cri.go:89] found id: "9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3"
	I1025 10:36:25.977994  503488 cri.go:89] found id: "1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1"
	I1025 10:36:25.977997  503488 cri.go:89] found id: "43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719"
	I1025 10:36:25.978000  503488 cri.go:89] found id: "c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17"
	I1025 10:36:25.978003  503488 cri.go:89] found id: ""
	I1025 10:36:25.978057  503488 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:36:26.016188  503488 retry.go:31] will retry after 272.661267ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:25Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:26.289686  503488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:36:26.309841  503488 pause.go:52] kubelet running: false
	I1025 10:36:26.309917  503488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:36:26.541686  503488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:36:26.541772  503488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:36:26.692122  503488 cri.go:89] found id: "0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9"
	I1025 10:36:26.692163  503488 cri.go:89] found id: "d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68"
	I1025 10:36:26.692169  503488 cri.go:89] found id: "9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3"
	I1025 10:36:26.692173  503488 cri.go:89] found id: "1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1"
	I1025 10:36:26.692176  503488 cri.go:89] found id: "43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719"
	I1025 10:36:26.692180  503488 cri.go:89] found id: "c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17"
	I1025 10:36:26.692183  503488 cri.go:89] found id: ""
	I1025 10:36:26.692264  503488 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:36:26.716233  503488 out.go:203] 
	W1025 10:36:26.719321  503488 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:36:26.719349  503488 out.go:285] * 
	* 
	W1025 10:36:26.726618  503488 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:36:26.733059  503488 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-491554 --alsologtostderr -v=1 failed: exit status 80
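The stderr trace pinpoints where pause breaks: minikube stops the kubelet, then tries to enumerate running containers with `sudo runc list -f json`, which exits 1 because /run/runc does not exist on the node. One plausible cause (an assumption, not confirmed by this log) is that cri-o is configured with a default OCI runtime other than runc, such as crun, so runc's state directory is never created even though cri-o itself still tracks the containers. A shell sketch for confirming this on the node:

	# Reproduce the failing call minikube makes (expected to fail as in the log).
	minikube -p newest-cni-491554 ssh -- sudo runc list -f json
	# cri-o still sees the containers regardless of which OCI runtime it uses.
	minikube -p newest-cni-491554 ssh -- sudo crictl ps -a
	# Check which OCI runtime state directory actually exists (crun path assumed).
	minikube -p newest-cni-491554 ssh -- ls -d /run/runc /run/crun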
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-491554
helpers_test.go:243: (dbg) docker inspect newest-cni-491554:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	        "Created": "2025-10-25T10:35:24.032490574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500595,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:36:07.362157541Z",
	            "FinishedAt": "2025-10-25T10:36:06.425919239Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216-json.log",
	        "Name": "/newest-cni-491554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-491554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-491554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	                "LowerDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-491554",
	                "Source": "/var/lib/docker/volumes/newest-cni-491554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-491554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-491554",
	                "name.minikube.sigs.k8s.io": "newest-cni-491554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67e480e6479009b235933eefd7fd181bbb525464bbd8b13f0216d777eab3ccf5",
	            "SandboxKey": "/var/run/docker/netns/67e480e64790",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-491554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:50:64:39:5c:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f83aa7d97dd61a3e183e8b61de27687f028a404822311667002b081cafdf7acf",
	                    "EndpointID": "3ef1a9617c1a669de2711d96649092db9b712fb33ac403523906b430415e9636",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-491554",
	                        "3a1d576c3602"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
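For reference, the post-mortem only needs a few of the fields above. A minimal sketch (not part of the test suite) that asks `docker inspect` for just the pause-relevant state via its `--format` Go template, using the container name from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// --format takes a Go template evaluated against the inspect payload;
		// State.Status and State.Paused are the fields a pause check cares about.
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}} paused={{.State.Paused}}",
			"newest-cni-491554").CombinedOutput()
		if err != nil {
			fmt.Printf("inspect failed: %v: %s", err, out)
			return
		}
		fmt.Print(string(out)) // e.g. "running paused=false"
	}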
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554: exit status 2 (556.166837ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
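The "(may be ok)" note reflects that `minikube status` encodes cluster state in its exit code, so a nonzero exit is informational rather than a hard command failure. A minimal sketch of reading both the formatted output and the exit code (binary path and profile name taken from the log above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "newest-cni-491554")
		out, err := cmd.Output()
		fmt.Printf("host: %s\n", out) // prints "Running" here even though exit != 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Nonzero codes encode component state, not necessarily failure.
			fmt.Printf("exit code: %d\n", exitErr.ExitCode())
		}
	}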
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25: (1.694227604s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-768303 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-491554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-491554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-768303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ image   │ newest-cni-491554 image list --format=json                                                                                                                                                                                                    │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ pause   │ -p newest-cni-491554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:36:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:36:15.134205  501769 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:36:15.134467  501769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:15.134490  501769 out.go:374] Setting ErrFile to fd 2...
	I1025 10:36:15.134509  501769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:15.134813  501769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:36:15.135292  501769 out.go:368] Setting JSON to false
	I1025 10:36:15.136264  501769 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8325,"bootTime":1761380250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:36:15.136361  501769 start.go:141] virtualization:  
	I1025 10:36:15.141508  501769 out.go:179] * [no-preload-768303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:36:15.144767  501769 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:36:15.144847  501769 notify.go:220] Checking for updates...
	I1025 10:36:15.151205  501769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:36:15.154400  501769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:15.157513  501769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:36:15.161221  501769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:36:15.164435  501769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:36:15.167958  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:15.168648  501769 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:36:15.216762  501769 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:36:15.216884  501769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:15.317007  501769 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:15.306598812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:15.317115  501769 docker.go:318] overlay module found
	I1025 10:36:15.320375  501769 out.go:179] * Using the docker driver based on existing profile
	I1025 10:36:15.323812  501769 start.go:305] selected driver: docker
	I1025 10:36:15.323840  501769 start.go:925] validating driver "docker" against &{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.323944  501769 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:36:15.324700  501769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:15.416031  501769 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:15.404078968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:15.416360  501769 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:15.416379  501769 cni.go:84] Creating CNI manager for ""
	I1025 10:36:15.416441  501769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:15.416485  501769 start.go:349] cluster config:
	{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.420486  501769 out.go:179] * Starting "no-preload-768303" primary control-plane node in "no-preload-768303" cluster
	I1025 10:36:15.424456  501769 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:36:15.427837  501769 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:36:15.431315  501769 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:15.431446  501769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:36:15.431698  501769 cache.go:107] acquiring lock: {Name:mkcb674bf6bbc265e760bf8be116a57186608a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.431767  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:36:15.431775  501769 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.214µs
	I1025 10:36:15.431784  501769 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:36:15.431795  501769 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:36:15.432007  501769 cache.go:107] acquiring lock: {Name:mkb1799d37a5611969ac9809065db3c631238657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432074  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:36:15.432083  501769 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 82.479µs
	I1025 10:36:15.432090  501769 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:36:15.432117  501769 cache.go:107] acquiring lock: {Name:mk9facf4e59193f96d96012cf82ef7fef364093d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432158  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:36:15.432163  501769 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 49.01µs
	I1025 10:36:15.432170  501769 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:36:15.432180  501769 cache.go:107] acquiring lock: {Name:mk1e264701efd819526cb1327aac37ba6383079c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432207  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:36:15.432212  501769 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.297µs
	I1025 10:36:15.432218  501769 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:36:15.432227  501769 cache.go:107] acquiring lock: {Name:mk145e03dafbcb30f74a27f99b5fba1addf06371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432252  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:36:15.432256  501769 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.893µs
	I1025 10:36:15.432262  501769 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:36:15.432272  501769 cache.go:107] acquiring lock: {Name:mk92a2a5fb8dde9e51922a55162996cccaaf10a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432303  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:36:15.432309  501769 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.654µs
	I1025 10:36:15.432314  501769 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:36:15.432331  501769 cache.go:107] acquiring lock: {Name:mkd43195497e2780982a3de630a4cda8f1c812f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432357  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:36:15.432362  501769 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1025 10:36:15.432368  501769 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:36:15.432377  501769 cache.go:107] acquiring lock: {Name:mk2866f59a9236262f732426434fc9bafb724b61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432405  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:36:15.432409  501769 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.403µs
	I1025 10:36:15.432414  501769 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:36:15.432420  501769 cache.go:87] Successfully saved all images to host disk.
	I1025 10:36:15.457514  501769 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:36:15.457535  501769 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:36:15.457547  501769 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:36:15.457575  501769 start.go:360] acquireMachinesLock for no-preload-768303: {Name:mkf575e11dd83318b723f79e28f313be28102c7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.457624  501769 start.go:364] duration metric: took 33.568µs to acquireMachinesLock for "no-preload-768303"
	I1025 10:36:15.457641  501769 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:36:15.457648  501769 fix.go:54] fixHost starting: 
	I1025 10:36:15.457895  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:15.490005  501769 fix.go:112] recreateIfNeeded on no-preload-768303: state=Stopped err=<nil>
	W1025 10:36:15.490033  501769 fix.go:138] unexpected machine state, will restart: <nil>
	I1025 10:36:14.186772  500465 kubeadm.go:883] updating cluster {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:14.186899  500465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:14.186980  500465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:14.221841  500465 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:14.221866  500465 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:36:14.221930  500465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:14.251629  500465 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:14.251652  500465 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:14.251660  500465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:14.251754  500465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-491554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
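The blank `ExecStart=` line in the unit above is deliberate: systemd drop-ins append to list-valued directives, so an override must first clear `ExecStart` before redefining it. A minimal sketch of writing such a drop-in locally (path mirrors the log, flags abridged from it; requires root on a systemd host, followed by the `daemon-reload` seen below):

	package main

	import (
		"fmt"
		"os"
	)

	// The empty ExecStart= clears the base unit's command so the
	// second line fully replaces it rather than appending.
	const dropIn = `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-491554 --node-ip=192.168.76.2
	`

	func main() {
		path := "/etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
		if err := os.WriteFile(path, []byte(dropIn), 0o644); err != nil {
			fmt.Println(err)
			return
		}
		// Apply with: systemctl daemon-reload && systemctl restart kubelet
		fmt.Println("wrote", path)
	}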
	I1025 10:36:14.251842  500465 ssh_runner.go:195] Run: crio config
	I1025 10:36:14.321848  500465 cni.go:84] Creating CNI manager for ""
	I1025 10:36:14.321934  500465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:14.321968  500465 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:36:14.322031  500465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-491554 NodeName:newest-cni-491554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:14.322250  500465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-491554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:36:14.322369  500465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:14.332729  500465 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:14.332812  500465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:14.340214  500465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:36:14.354329  500465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:14.379506  500465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:36:14.393570  500465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:14.397259  500465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
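That bash one-liner makes the hosts entry idempotent: strip any previous `control-plane.minikube.internal` line, then append the current mapping. A rough Go equivalent of the same update, with the address and hostname taken from the log:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.76.2\t" + host
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		// Keep every line except a previous control-plane mapping.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			fmt.Println(err)
		}
	}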
	I1025 10:36:14.406626  500465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:14.574690  500465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:14.636684  500465 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554 for IP: 192.168.76.2
	I1025 10:36:14.636704  500465 certs.go:195] generating shared ca certs ...
	I1025 10:36:14.636719  500465 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:14.636878  500465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:14.636927  500465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:14.636935  500465 certs.go:257] generating profile certs ...
	I1025 10:36:14.637015  500465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key
	I1025 10:36:14.637064  500465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda
	I1025 10:36:14.637141  500465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key
	I1025 10:36:14.637253  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:14.637287  500465 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:14.637297  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:14.637323  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:14.637356  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:14.637379  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:14.637423  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:14.638029  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:14.683070  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:14.721261  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:14.758069  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:14.808648  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:36:14.836206  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:36:14.861029  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:14.899389  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:36:14.943287  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:14.965867  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:14.990341  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:15.027736  500465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:15.050920  500465 ssh_runner.go:195] Run: openssl version
	I1025 10:36:15.060709  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:15.088600  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.095196  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.095320  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.150349  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:15.160882  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:15.172622  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.180280  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.180415  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.235432  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:36:15.247140  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:15.262015  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.267720  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.267859  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.312882  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
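The three hash-and-link pairs above are how minikube publishes a CA into the node's trust store: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 for minikubeCA.pem here), and the cert is then symlinked as `<hash>.0` under /etc/ssl/certs, where OpenSSL-based clients look CAs up by hash. A minimal local sketch of that step in Go (illustrative only; the test drives the equivalent shell commands over SSH, and the cert path is taken from this log):

	// ca_symlink.go — sketch of the hash-symlink step performed above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
		// Ask openssl for the subject-name hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", cert)
	}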
	I1025 10:36:15.325327  500465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:15.330842  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:36:15.377296  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:36:15.431072  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:36:15.527602  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:36:15.618846  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:36:15.817691  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
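Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would force certificate regeneration before the cluster restart proceeds. A minimal sketch of the same check using Go's crypto/x509 (illustrative; minikube shells out to openssl rather than doing this in-process):

	// checkend.go — equivalent of `openssl x509 -noout -in CERT -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		if len(os.Args) < 2 {
			panic("usage: checkend CERT") // e.g. /var/lib/minikube/certs/etcd/peer.crt
		}
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Fail if the cert's NotAfter falls inside the next 86400 seconds.
		if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
			fmt.Println("certificate will expire within 86400s")
			os.Exit(1)
		}
		fmt.Println("certificate is valid beyond 86400s")
	}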
	I1025 10:36:15.964228  500465 kubeadm.go:400] StartCluster: {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.964327  500465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:15.964433  500465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:16.105630  500465 cri.go:89] found id: "9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3"
	I1025 10:36:16.105655  500465 cri.go:89] found id: "1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1"
	I1025 10:36:16.105662  500465 cri.go:89] found id: "43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719"
	I1025 10:36:16.105666  500465 cri.go:89] found id: "c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17"
	I1025 10:36:16.105669  500465 cri.go:89] found id: ""
	I1025 10:36:16.105729  500465 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:36:16.152016  500465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:16.152186  500465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:16.173643  500465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:36:16.173660  500465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:36:16.173739  500465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:36:16.204670  500465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:36:16.205153  500465 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-491554" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:16.205248  500465 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-491554" cluster setting kubeconfig missing "newest-cni-491554" context setting]
	I1025 10:36:16.205516  500465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
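The repair above adds the missing cluster and context entries to the kubeconfig before rewriting it under the file lock shown. Sketched with client-go's clientcmd API (an assumed, illustrative equivalent only: minikube uses its own kubeconfig helpers, and the server URL below is inferred from the profile's node IP and API server port in this log):

	// kubeconfig_repair.go — sketch of the kubeconfig repair described above.
	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21794-292167/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Add the cluster and context entries the verifier found missing.
		cfg.Clusters["newest-cni-491554"] = &api.Cluster{
			Server:               "https://192.168.76.2:8443",
			CertificateAuthority: "/home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt",
		}
		cfg.Contexts["newest-cni-491554"] = &api.Context{
			Cluster:  "newest-cni-491554",
			AuthInfo: "newest-cni-491554",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}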
	I1025 10:36:16.207102  500465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:36:16.228100  500465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:36:16.228181  500465 kubeadm.go:601] duration metric: took 54.513753ms to restartPrimaryControlPlane
	I1025 10:36:16.228205  500465 kubeadm.go:402] duration metric: took 263.987643ms to StartCluster
	I1025 10:36:16.228250  500465 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:16.228345  500465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:16.229112  500465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:16.229656  500465 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:16.229767  500465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:36:16.229841  500465 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-491554"
	I1025 10:36:16.229865  500465 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-491554"
	W1025 10:36:16.229872  500465 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:36:16.229892  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.230372  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.229743  500465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:16.230841  500465 addons.go:69] Setting dashboard=true in profile "newest-cni-491554"
	I1025 10:36:16.231316  500465 addons.go:238] Setting addon dashboard=true in "newest-cni-491554"
	W1025 10:36:16.231342  500465 addons.go:247] addon dashboard should already be in state true
	I1025 10:36:16.231392  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.232092  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.230855  500465 addons.go:69] Setting default-storageclass=true in profile "newest-cni-491554"
	I1025 10:36:16.238729  500465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-491554"
	I1025 10:36:16.238763  500465 out.go:179] * Verifying Kubernetes components...
	I1025 10:36:16.243426  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.247261  500465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:16.306653  500465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:36:16.309814  500465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:16.309839  500465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:36:16.309912  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.324113  500465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:36:16.327138  500465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:36:16.327945  500465 addons.go:238] Setting addon default-storageclass=true in "newest-cni-491554"
	W1025 10:36:16.327962  500465 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:36:16.327987  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.328393  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.330146  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:36:16.330166  500465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:36:16.330245  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.355340  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.394125  500465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:16.394148  500465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:36:16.394223  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.395868  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.431623  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.746212  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:16.787893  500465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:16.874873  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:16.906769  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:36:16.906790  500465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:36:16.994116  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:36:16.994150  500465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:36:15.495321  501769 out.go:252] * Restarting existing docker container for "no-preload-768303" ...
	I1025 10:36:15.495401  501769 cli_runner.go:164] Run: docker start no-preload-768303
	I1025 10:36:15.864718  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:15.895881  501769 kic.go:430] container "no-preload-768303" state is running.
	I1025 10:36:15.896686  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:15.922299  501769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:36:15.922526  501769 machine.go:93] provisionDockerMachine start ...
	I1025 10:36:15.922584  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:15.951361  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:15.951751  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:15.951762  501769 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:36:15.952562  501769 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35466->127.0.0.1:33467: read: connection reset by peer
	I1025 10:36:19.134912  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:36:19.134992  501769 ubuntu.go:182] provisioning hostname "no-preload-768303"
	I1025 10:36:19.135100  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:19.167786  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:19.168102  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:19.168113  501769 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-768303 && echo "no-preload-768303" | sudo tee /etc/hostname
	I1025 10:36:19.373484  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:36:19.373575  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:19.404494  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:19.404846  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:19.404868  501769 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-768303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-768303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-768303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:36:19.591589  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:36:19.591618  501769 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:36:19.591649  501769 ubuntu.go:190] setting up certificates
	I1025 10:36:19.591658  501769 provision.go:84] configureAuth start
	I1025 10:36:19.591724  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:19.623368  501769 provision.go:143] copyHostCerts
	I1025 10:36:19.623448  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:36:19.623470  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:36:19.623556  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:36:19.623701  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:36:19.623714  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:36:19.623746  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:36:19.623808  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:36:19.623816  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:36:19.623840  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:36:19.623897  501769 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.no-preload-768303 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-768303]
	I1025 10:36:17.111593  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:36:17.111616  500465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:36:17.180846  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:36:17.180870  500465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:36:17.201686  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:36:17.201710  500465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:36:17.217738  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:36:17.217763  500465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:36:17.232998  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:36:17.233028  500465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:36:17.249154  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:36:17.249177  500465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:36:17.265124  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:36:17.265156  500465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:36:17.281717  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:36:23.039773  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.293479161s)
	I1025 10:36:23.039836  500465 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.251924244s)
	I1025 10:36:23.039870  500465 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:36:23.039924  500465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:36:23.039992  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.165098657s)
	I1025 10:36:23.155630  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.873866534s)
	I1025 10:36:23.155865  500465 api_server.go:72] duration metric: took 6.925098659s to wait for apiserver process to appear ...
	I1025 10:36:23.155906  500465 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:36:23.155928  500465 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:36:23.158644  500465 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-491554 addons enable metrics-server
	
	I1025 10:36:23.161489  500465 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:36:23.164344  500465 addons.go:514] duration metric: took 6.934558755s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:36:23.165017  500465 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:36:23.166067  500465 api_server.go:141] control plane version: v1.34.1
	I1025 10:36:23.166113  500465 api_server.go:131] duration metric: took 10.197101ms to wait for apiserver health ...
	I1025 10:36:23.166137  500465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:36:23.170230  500465 system_pods.go:59] 8 kube-system pods found
	I1025 10:36:23.170262  500465 system_pods.go:61] "coredns-66bc5c9577-zxmft" [c65f8d6e-61d0-4d82-b7af-b758693a7232] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:23.170271  500465 system_pods.go:61] "etcd-newest-cni-491554" [f243a6c8-1369-43b7-99b9-76822aea8145] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:36:23.170277  500465 system_pods.go:61] "kindnet-p6hkm" [b1e90261-b931-4949-be7a-bb6e26597d55] Running
	I1025 10:36:23.170285  500465 system_pods.go:61] "kube-apiserver-newest-cni-491554" [571dea08-1bde-4093-8f36-82f161cbd707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:36:23.170294  500465 system_pods.go:61] "kube-controller-manager-newest-cni-491554" [c72d8953-fb33-4daa-b825-2b161239fc0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:36:23.170300  500465 system_pods.go:61] "kube-proxy-vwhfz" [151013b4-f8bd-444f-b983-7fd1136a2003] Running
	I1025 10:36:23.170306  500465 system_pods.go:61] "kube-scheduler-newest-cni-491554" [14ab0451-bfca-4937-8ac3-892c41c89d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:36:23.170311  500465 system_pods.go:61] "storage-provisioner" [a653f861-3131-49d6-aa3d-4f280a2d5535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:23.170317  500465 system_pods.go:74] duration metric: took 4.16325ms to wait for pod list to return data ...
	I1025 10:36:23.170324  500465 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:36:23.173021  500465 default_sa.go:45] found service account: "default"
	I1025 10:36:23.173043  500465 default_sa.go:55] duration metric: took 2.713758ms for default service account to be created ...
	I1025 10:36:23.173055  500465 kubeadm.go:586] duration metric: took 6.942290333s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:36:23.173071  500465 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:36:23.176019  500465 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:36:23.176044  500465 node_conditions.go:123] node cpu capacity is 2
	I1025 10:36:23.176055  500465 node_conditions.go:105] duration metric: took 2.979879ms to run NodePressure ...
	I1025 10:36:23.176067  500465 start.go:241] waiting for startup goroutines ...
	I1025 10:36:23.176074  500465 start.go:246] waiting for cluster config update ...
	I1025 10:36:23.176086  500465 start.go:255] writing updated cluster config ...
	I1025 10:36:23.176358  500465 ssh_runner.go:195] Run: rm -f paused
	I1025 10:36:23.262559  500465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:36:23.265597  500465 out.go:179] * Done! kubectl is now configured to use "newest-cni-491554" cluster and "default" namespace by default
	I1025 10:36:20.402615  501769 provision.go:177] copyRemoteCerts
	I1025 10:36:20.402678  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:36:20.402722  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:20.427597  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:20.531788  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:36:20.557790  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:36:20.580101  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:36:20.605047  501769 provision.go:87] duration metric: took 1.013367052s to configureAuth
	I1025 10:36:20.605078  501769 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:36:20.605267  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:20.605377  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:20.648338  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:20.648657  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:20.648672  501769 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:36:21.132971  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:36:21.133056  501769 machine.go:96] duration metric: took 5.210521079s to provisionDockerMachine
	I1025 10:36:21.133132  501769 start.go:293] postStartSetup for "no-preload-768303" (driver="docker")
	I1025 10:36:21.133173  501769 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:36:21.133278  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:36:21.133352  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.161948  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.293509  501769 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:36:21.297662  501769 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:36:21.297694  501769 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:36:21.297707  501769 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:36:21.297760  501769 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:36:21.297840  501769 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:36:21.297945  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:36:21.310139  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:21.342071  501769 start.go:296] duration metric: took 208.905201ms for postStartSetup
	I1025 10:36:21.342155  501769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:36:21.342199  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.368705  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.486297  501769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:36:21.498259  501769 fix.go:56] duration metric: took 6.040603128s for fixHost
	I1025 10:36:21.498286  501769 start.go:83] releasing machines lock for "no-preload-768303", held for 6.040654337s
	I1025 10:36:21.498353  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:21.555204  501769 ssh_runner.go:195] Run: cat /version.json
	I1025 10:36:21.555261  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.555490  501769 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:36:21.555545  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.590796  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.605143  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.830574  501769 ssh_runner.go:195] Run: systemctl --version
	I1025 10:36:21.838491  501769 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:36:21.919585  501769 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:36:21.927981  501769 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:36:21.928054  501769 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:36:21.937518  501769 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:36:21.937541  501769 start.go:495] detecting cgroup driver to use...
	I1025 10:36:21.937575  501769 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:36:21.937647  501769 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:36:21.968555  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:36:21.988720  501769 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:36:21.988785  501769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:36:22.011902  501769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:36:22.041384  501769 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:36:22.236447  501769 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:36:22.447225  501769 docker.go:234] disabling docker service ...
	I1025 10:36:22.447296  501769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:36:22.468717  501769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:36:22.490154  501769 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:36:22.669862  501769 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:36:22.843706  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:36:22.863903  501769 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:36:22.881980  501769 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:36:22.882084  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.894301  501769 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:36:22.894406  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.909110  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.927140  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.940477  501769 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:36:22.957224  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.969570  501769 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.983321  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:23.000680  501769 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:36:23.015979  501769 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:36:23.028088  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:23.216577  501769 ssh_runner.go:195] Run: sudo systemctl restart crio
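Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup, and allow unprivileged low ports inside pods, after which crio is restarted. A plausible end state of the /etc/crio/crio.conf.d/02-crio.conf drop-in (a sketch: the section headers and the omission of unrelated keys are assumptions, since the seds only rewrite matching lines in place):

	# /etc/crio/crio.conf.d/02-crio.conf (sketch of the post-edit state)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]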
	I1025 10:36:23.450492  501769 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:36:23.450571  501769 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:36:23.455624  501769 start.go:563] Will wait 60s for crictl version
	I1025 10:36:23.455705  501769 ssh_runner.go:195] Run: which crictl
	I1025 10:36:23.460123  501769 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:36:23.494481  501769 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:36:23.494574  501769 ssh_runner.go:195] Run: crio --version
	I1025 10:36:23.547353  501769 ssh_runner.go:195] Run: crio --version
	I1025 10:36:23.584542  501769 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:36:23.587636  501769 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:23.605658  501769 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:36:23.609916  501769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:23.633582  501769 kubeadm.go:883] updating cluster {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:23.633701  501769 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:23.633761  501769 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:23.685217  501769 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:23.685243  501769 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:23.685267  501769 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:23.685380  501769 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-768303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:36:23.685500  501769 ssh_runner.go:195] Run: crio config
	I1025 10:36:23.790601  501769 cni.go:84] Creating CNI manager for ""
	I1025 10:36:23.790627  501769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:23.790654  501769 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:36:23.790679  501769 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-768303 NodeName:no-preload-768303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:23.790813  501769 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-768303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:36:23.790888  501769 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:23.806255  501769 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:23.806337  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:23.815583  501769 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:36:23.837862  501769 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:23.855842  501769 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
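The rendered kubeadm config above was just copied to /var/tmp/minikube/kubeadm.yaml.new; minikube compares it against the existing /var/tmp/minikube/kubeadm.yaml (the `sudo diff -u` seen earlier in this log) to decide whether the control plane needs reconfiguration. To sanity-check such a file by hand, recent kubeadm releases can validate it directly (assuming the subcommand is present in the installed kubeadm version):

	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new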
	I1025 10:36:23.870950  501769 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:23.876361  501769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:23.888081  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:24.103930  501769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:24.131955  501769 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303 for IP: 192.168.85.2
	I1025 10:36:24.131979  501769 certs.go:195] generating shared ca certs ...
	I1025 10:36:24.131996  501769 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:24.132171  501769 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:24.132221  501769 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:24.132234  501769 certs.go:257] generating profile certs ...
	I1025 10:36:24.132318  501769 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key
	I1025 10:36:24.132387  501769 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1
	I1025 10:36:24.132428  501769 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key
	I1025 10:36:24.132551  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:24.132586  501769 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:24.132599  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:24.132627  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:24.132655  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:24.132680  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:24.132728  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:24.133388  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:24.185379  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:24.248068  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:24.315354  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:24.395818  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:36:24.466312  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:36:24.539551  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:24.581328  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:36:24.603280  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:24.625708  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:24.649225  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:24.672035  501769 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:24.704197  501769 ssh_runner.go:195] Run: openssl version
	I1025 10:36:24.713706  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:24.729233  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.738694  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.738827  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.784781  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:36:24.793604  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:24.803832  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.808241  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.808361  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.854657  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:24.863938  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:24.872605  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.877461  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.877528  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.928847  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:36:24.940328  501769 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:24.945516  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:36:25.025567  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:36:25.104112  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:36:25.174488  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:36:25.297161  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:36:25.399799  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
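	
	Each `-checkend 86400` run asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration. The same check done natively with crypto/x509, as a sketch against one of the paths probed above:
	
	// checkend.go: the Go equivalent of `openssl x509 -checkend 86400`.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// -checkend N fails if NotAfter falls within the next N seconds.
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}
	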
	I1025 10:36:25.488231  501769 kubeadm.go:400] StartCluster: {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:25.488386  501769 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:25.488508  501769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:25.556878  501769 cri.go:89] found id: "c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f"
	I1025 10:36:25.556953  501769 cri.go:89] found id: "c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848"
	I1025 10:36:25.556974  501769 cri.go:89] found id: "82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88"
	I1025 10:36:25.556995  501769 cri.go:89] found id: "29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9"
	I1025 10:36:25.557031  501769 cri.go:89] found id: ""
	I1025 10:36:25.557118  501769 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:36:25.616902  501769 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:25Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:25.617033  501769 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:25.648414  501769 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:36:25.648440  501769 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:36:25.648502  501769 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:36:25.666112  501769 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:36:25.666665  501769 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-768303" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:25.666912  501769 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-768303" cluster setting kubeconfig missing "no-preload-768303" context setting]
	I1025 10:36:25.667443  501769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
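	
	The "needs updating (will repair)" path puts the missing cluster and context entries back into the kubeconfig before rewriting it under the file lock acquired above. A sketch of that repair using client-go's clientcmd API, with the server address taken from the node spec in the StartCluster dump and the CA path left illustrative; this is not minikube's own implementation:
	
	// kubeconfig-repair.go: re-add a cluster/context pair that a
	// kubeconfig lost, then write the file back.
	package main
	
	import (
		"fmt"
		"os"
	
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)
	
	func main() {
		path := os.Getenv("KUBECONFIG") // the report uses an absolute path
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "no-preload-768303"
		if _, ok := cfg.Clusters[name]; !ok {
			cluster := api.NewCluster()
			cluster.Server = "https://192.168.85.2:8443"     // from the node spec above
			cluster.CertificateAuthority = "/path/to/ca.crt" // illustrative
			cfg.Clusters[name] = cluster
	
			ctx := api.NewContext()
			ctx.Cluster = name
			ctx.AuthInfo = name // assumes a matching user entry exists
			cfg.Contexts[name] = ctx
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	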
	I1025 10:36:25.668925  501769 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:36:25.696662  501769 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:36:25.696699  501769 kubeadm.go:601] duration metric: took 48.251563ms to restartPrimaryControlPlane
	I1025 10:36:25.696710  501769 kubeadm.go:402] duration metric: took 208.49079ms to StartCluster
	I1025 10:36:25.696726  501769 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:25.696786  501769 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:25.697688  501769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:25.697909  501769 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:25.698273  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:25.698330  501769 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:36:25.698405  501769 addons.go:69] Setting storage-provisioner=true in profile "no-preload-768303"
	I1025 10:36:25.698421  501769 addons.go:238] Setting addon storage-provisioner=true in "no-preload-768303"
	I1025 10:36:25.698419  501769 addons.go:69] Setting dashboard=true in profile "no-preload-768303"
	I1025 10:36:25.698433  501769 addons.go:69] Setting default-storageclass=true in profile "no-preload-768303"
	I1025 10:36:25.698441  501769 addons.go:238] Setting addon dashboard=true in "no-preload-768303"
	I1025 10:36:25.698444  501769 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-768303"
	W1025 10:36:25.698449  501769 addons.go:247] addon dashboard should already be in state true
	I1025 10:36:25.698480  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.698755  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.698901  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	W1025 10:36:25.698427  501769 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:36:25.699263  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.699685  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.706152  501769 out.go:179] * Verifying Kubernetes components...
	I1025 10:36:25.709213  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:25.760041  501769 addons.go:238] Setting addon default-storageclass=true in "no-preload-768303"
	W1025 10:36:25.760065  501769 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:36:25.760091  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.760558  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.761786  501769 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:36:25.761867  501769 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:36:25.765123  501769 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.098020326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.111539876Z" level=info msg="Running pod sandbox: kube-system/kindnet-p6hkm/POD" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.111625883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.127952304Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.13072839Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=95da1eda-1605-449d-9774-b8ba30f3fb6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.152724653Z" level=info msg="Ran pod sandbox 2e91e5bb075804c4b3adc6d37b01c5cb74a746439edde6ac2f3bf2f61e2bb46e with infra container: kube-system/kube-proxy-vwhfz/POD" id=95da1eda-1605-449d-9774-b8ba30f3fb6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.160646953Z" level=info msg="Ran pod sandbox 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49 with infra container: kube-system/kindnet-p6hkm/POD" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.171229636Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=293d7566-6f0f-4143-9c4b-97e588a87de2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.171655817Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c325d352-506a-475c-a41e-9914795d3f5f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.180815158Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=46c39a8b-b6c9-4378-a83f-a44d90454463 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.181431455Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4b47ca6a-89eb-4f50-9862-4f4f87e33487 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.18337199Z" level=info msg="Creating container: kube-system/kube-proxy-vwhfz/kube-proxy" id=286d71d4-5cb3-404f-a066-8630d958a073 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.183639293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.187117798Z" level=info msg="Creating container: kube-system/kindnet-p6hkm/kindnet-cni" id=660f946d-1cc5-4ad4-8d72-4097607b0fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.187718611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.209108137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.217700503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.219094871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.219347913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.323716684Z" level=info msg="Created container d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68: kube-system/kube-proxy-vwhfz/kube-proxy" id=286d71d4-5cb3-404f-a066-8630d958a073 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.324875989Z" level=info msg="Starting container: d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68" id=7e30feac-28a3-4fb0-95a0-91788fa6c06b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.331369876Z" level=info msg="Created container 0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9: kube-system/kindnet-p6hkm/kindnet-cni" id=660f946d-1cc5-4ad4-8d72-4097607b0fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.334984449Z" level=info msg="Starting container: 0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9" id=3cb21438-58d1-4c4b-bef8-896a0e294be3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.336358953Z" level=info msg="Started container" PID=1056 containerID=d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68 description=kube-system/kube-proxy-vwhfz/kube-proxy id=7e30feac-28a3-4fb0-95a0-91788fa6c06b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e91e5bb075804c4b3adc6d37b01c5cb74a746439edde6ac2f3bf2f61e2bb46e
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.356067852Z" level=info msg="Started container" PID=1059 containerID=0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9 description=kube-system/kindnet-p6hkm/kindnet-cni id=3cb21438-58d1-4c4b-bef8-896a0e294be3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0f77a6ef9d856       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   4698b82d58590       kindnet-p6hkm                               kube-system
	d0616de1d50d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   2e91e5bb07580       kube-proxy-vwhfz                            kube-system
	9e718b8f8ba60       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   90ef2d1a416d7       kube-apiserver-newest-cni-491554            kube-system
	1eaae55e80cdb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   82f12e9a6b7ae       etcd-newest-cni-491554                      kube-system
	43ac3c147f037       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   08af93849303c       kube-scheduler-newest-cni-491554            kube-system
	c658171399887       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   ef92cf161efa0       kube-controller-manager-newest-cni-491554   kube-system
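	
	The table above is crictl's human-readable view; the same data is available as JSON, which is easier to consume from a test. A sketch of reading it, where the JSON field names are an assumption pinned to the current crictl schema (the CRI ListContainers response) and may differ by version:
	
	// crictl-json.go: decode `crictl ps -a -o json` into the columns
	// shown in the status table above.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)
	
	// Minimal subset of crictl's JSON output; field names assumed.
	type containerList struct {
		Containers []struct {
			ID       string `json:"id"`
			State    string `json:"state"` // e.g. CONTAINER_RUNNING
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Labels map[string]string `json:"labels"`
		} `json:"containers"`
	}
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "-o", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var list containerList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range list.Containers {
			fmt.Printf("%.13s  %-25s %s\n", c.ID, c.Metadata.Name, c.State)
		}
	}
	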
	
	
	==> describe nodes <==
	Name:               newest-cni-491554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-491554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-491554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-491554
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:36:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-491554
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0f4f685b-8865-430c-806a-9e13f4963eb6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-491554                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-p6hkm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-491554             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-491554    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-vwhfz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-491554             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 43s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 43s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           29s                node-controller  Node newest-cni-491554 event: Registered Node newest-cni-491554 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 14s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 14s)  kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 14s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-491554 event: Registered Node newest-cni-491554 in Controller
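	
	The Ready=False condition is the one the framework keys on here: the kubelet reports KubeletNotReady until a CNI config appears in /etc/cni/net.d, and kindnet had only just restarted. Reading that condition programmatically with client-go, as a sketch (kubeconfig path left to the environment):
	
	// node-ready.go: print the NodeReady condition for the node above.
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"newest-cni-491554", metav1.GetOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// For the node above: Ready=False, reason KubeletNotReady,
				// until a CNI config shows up.
				fmt.Printf("Ready=%s reason=%s: %s\n", c.Status, c.Reason, c.Message)
			}
		}
	}
	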
	
	
	==> dmesg <==
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[  +9.574283] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1] <==
	{"level":"warn","ts":"2025-10-25T10:36:18.881542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.907087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.944287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.976995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.015133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.026549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.060625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.086785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.111667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.128202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.176901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.197703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.219071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.265213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.329133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.371495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.423863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.468395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.523830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.595471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.697052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.749566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.791476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.839290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.959326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:28 up  2:18,  0 user,  load average: 5.38, 4.06, 3.36
	Linux newest-cni-491554 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9] <==
	I1025 10:36:22.477066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:36:22.477386       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:36:22.488327       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:36:22.488843       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:36:22.488889       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:36:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:36:22.695464       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:36:22.695490       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:36:22.695499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:36:22.696142       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3] <==
	I1025 10:36:21.254223       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:36:21.263921       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:36:21.277266       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:36:21.288040       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:36:21.288066       1 policy_source.go:240] refreshing policies
	I1025 10:36:21.313712       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:36:21.313805       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:36:21.314001       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:36:21.314122       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:36:21.320219       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:36:21.330012       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:36:21.355401       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:36:21.364175       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:36:21.403196       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:36:21.858793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:36:21.903026       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:36:22.411557       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:36:22.777518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:36:22.875339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:36:22.913879       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:36:23.096254       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.89.216"}
	I1025 10:36:23.146861       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.228.238"}
	I1025 10:36:25.527322       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:36:25.777449       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:36:25.881649       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17] <==
	I1025 10:36:25.397026       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:36:25.398085       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:36:25.398096       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:36:25.405826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:36:25.405979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:36:25.423291       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:36:25.425856       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:36:25.425895       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:25.426084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:36:25.426123       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:36:25.426132       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:36:25.427265       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:36:25.428054       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:36:25.428114       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:36:25.428137       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:36:25.428143       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:36:25.428143       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:36:25.428149       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:36:25.435966       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:36:25.459257       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:36:25.462889       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:25.479644       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:25.479678       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:36:25.479688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:36:25.524308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68] <==
	I1025 10:36:22.522220       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:36:22.706536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:36:22.921939       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:36:22.921987       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:36:22.922071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:36:23.136927       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:36:23.137077       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:36:23.178575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:36:23.178975       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:36:23.179193       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:23.181010       1 config.go:200] "Starting service config controller"
	I1025 10:36:23.181081       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:36:23.181125       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:36:23.181179       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:36:23.181242       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:36:23.181272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:36:23.183834       1 config.go:309] "Starting node config controller"
	I1025 10:36:23.183925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:36:23.183976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:36:23.281535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:36:23.281689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:36:23.281720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719] <==
	I1025 10:36:18.728227       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:36:21.239537       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:36:21.239577       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:36:21.239588       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:36:21.239595       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:36:21.325037       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:36:21.325063       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:21.336186       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:36:21.336664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:21.354894       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:21.336691       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:36:21.455639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:36:18 newest-cni-491554 kubelet[725]: E1025 10:36:18.731710     725 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-491554\" not found" node="newest-cni-491554"
	Oct 25 10:36:20 newest-cni-491554 kubelet[725]: I1025 10:36:20.996999     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.425936     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-491554\" already exists" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.425982     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440272     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440397     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440437     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.441602     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.447999     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-491554\" already exists" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.448174     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.479068     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-491554\" already exists" pod="kube-system/kube-controller-manager-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.484009     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.514026     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-491554\" already exists" pod="kube-system/kube-scheduler-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.777915     725 apiserver.go:52] "Watching apiserver"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.796363     725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882486     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-xtables-lock\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882548     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-cni-cfg\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882589     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-lib-modules\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882609     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-lib-modules\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882626     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-xtables-lock\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.939174     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:36:22 newest-cni-491554 kubelet[725]: W1025 10:36:22.159035     725 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/crio-4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49 WatchSource:0}: Error finding container 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49: Status 404 returned error can't find the container with id 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491554 -n newest-cni-491554: exit status 2 (514.966218ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-491554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj: exit status 1 (154.937477ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zxmft" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6fngc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4jtcj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj: exit status 1
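	
	The non-running-pods sweep above (`--field-selector=status.phase!=Running`) lists across all namespaces, but the follow-up describe ran without a namespace flag, so it looked in default rather than kube-system and kubernetes-dashboard, hence the NotFound errors. The same server-side selector through client-go, as a sketch reusing the clientset pattern from earlier:
	
	// nonrunning.go: list pods not in phase Running, all namespaces.
	package main
	
	import (
		"context"
		"fmt"
		"os"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Server-side filter, empty namespace means all namespaces.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}
	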
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-491554
helpers_test.go:243: (dbg) docker inspect newest-cni-491554:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	        "Created": "2025-10-25T10:35:24.032490574Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500595,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:36:07.362157541Z",
	            "FinishedAt": "2025-10-25T10:36:06.425919239Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hostname",
	        "HostsPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/hosts",
	        "LogPath": "/var/lib/docker/containers/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216-json.log",
	        "Name": "/newest-cni-491554",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-491554:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-491554",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216",
	                "LowerDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/de00d36f59cd60c2fcb113e5a127cd5718597e1a30c7904107c9cf639dfd903c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-491554",
	                "Source": "/var/lib/docker/volumes/newest-cni-491554/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-491554",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-491554",
	                "name.minikube.sigs.k8s.io": "newest-cni-491554",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "67e480e6479009b235933eefd7fd181bbb525464bbd8b13f0216d777eab3ccf5",
	            "SandboxKey": "/var/run/docker/netns/67e480e64790",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-491554": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:50:64:39:5c:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f83aa7d97dd61a3e183e8b61de27687f028a404822311667002b081cafdf7acf",
	                    "EndpointID": "3ef1a9617c1a669de2711d96649092db9b712fb33ac403523906b430415e9636",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-491554",
	                        "3a1d576c3602"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
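
Note: the inspect JSON above is the whole document; when only a single field is needed, docker's Go-template filter can pull it directly. A minimal sketch using the container name from this run (hyphenated network names need `index` rather than dot access; expected values taken from the NetworkSettings block above):

  $ docker inspect -f '{{(index .NetworkSettings.Networks "newest-cni-491554").IPAddress}}' newest-cni-491554
  192.168.76.2
  $ docker port newest-cni-491554 8443
  127.0.0.1:33465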
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554: exit status 2 (565.661459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
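Note: `status --format={{.Host}}` reports only the host state, so "Running" together with exit status 2 just means some other component (kubelet, apiserver) is not in its expected state — which is what a freshly paused profile should look like. To see every field at once, a hedged alternative:

  $ out/minikube-linux-arm64 status -p newest-cni-491554 --output json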
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25: (1.767780719s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-419185 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │                     │
	│ stop    │ -p embed-certs-419185 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:33 UTC │ 25 Oct 25 10:34 UTC │
	│ addons  │ enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ image   │ default-k8s-diff-port-204074 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-768303 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-491554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-491554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-768303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ image   │ newest-cni-491554 image list --format=json                                                                                                                                                                                                    │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ pause   │ -p newest-cni-491554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:36:15
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:36:15.134205  501769 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:36:15.134467  501769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:15.134490  501769 out.go:374] Setting ErrFile to fd 2...
	I1025 10:36:15.134509  501769 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:15.134813  501769 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:36:15.135292  501769 out.go:368] Setting JSON to false
	I1025 10:36:15.136264  501769 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8325,"bootTime":1761380250,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:36:15.136361  501769 start.go:141] virtualization:  
	I1025 10:36:15.141508  501769 out.go:179] * [no-preload-768303] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:36:15.144767  501769 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:36:15.144847  501769 notify.go:220] Checking for updates...
	I1025 10:36:15.151205  501769 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:36:15.154400  501769 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:15.157513  501769 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:36:15.161221  501769 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:36:15.164435  501769 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:36:15.167958  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:15.168648  501769 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:36:15.216762  501769 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:36:15.216884  501769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:15.317007  501769 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:15.306598812 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:15.317115  501769 docker.go:318] overlay module found
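Note: the struct dumped at info.go:266 is minikube's parse of the `docker system info --format "{{json .}}"` call issued just above it; the same Go-template mechanism can select individual fields. A small sketch (expected values from this run's dump):

  $ docker system info --format '{{.Driver}} {{.NCPU}} {{.MemTotal}}'
  overlay2 2 8214831104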
	I1025 10:36:15.320375  501769 out.go:179] * Using the docker driver based on existing profile
	I1025 10:36:15.323812  501769 start.go:305] selected driver: docker
	I1025 10:36:15.323840  501769 start.go:925] validating driver "docker" against &{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.323944  501769 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:36:15.324700  501769 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:15.416031  501769 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:15.404078968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:15.416360  501769 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:15.416379  501769 cni.go:84] Creating CNI manager for ""
	I1025 10:36:15.416441  501769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:15.416485  501769 start.go:349] cluster config:
	{Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.420486  501769 out.go:179] * Starting "no-preload-768303" primary control-plane node in "no-preload-768303" cluster
	I1025 10:36:15.424456  501769 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:36:15.427837  501769 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:36:15.431315  501769 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:15.431446  501769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:36:15.431698  501769 cache.go:107] acquiring lock: {Name:mkcb674bf6bbc265e760bf8be116a57186608a58 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.431767  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1025 10:36:15.431775  501769 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 87.214µs
	I1025 10:36:15.431784  501769 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1025 10:36:15.431795  501769 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:36:15.432007  501769 cache.go:107] acquiring lock: {Name:mkb1799d37a5611969ac9809065db3c631238657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432074  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1025 10:36:15.432083  501769 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 82.479µs
	I1025 10:36:15.432090  501769 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1025 10:36:15.432117  501769 cache.go:107] acquiring lock: {Name:mk9facf4e59193f96d96012cf82ef7fef364093d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432158  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1025 10:36:15.432163  501769 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 49.01µs
	I1025 10:36:15.432170  501769 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1025 10:36:15.432180  501769 cache.go:107] acquiring lock: {Name:mk1e264701efd819526cb1327aac37ba6383079c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432207  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1025 10:36:15.432212  501769 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 33.297µs
	I1025 10:36:15.432218  501769 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1025 10:36:15.432227  501769 cache.go:107] acquiring lock: {Name:mk145e03dafbcb30f74a27f99b5fba1addf06371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432252  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1025 10:36:15.432256  501769 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.893µs
	I1025 10:36:15.432262  501769 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1025 10:36:15.432272  501769 cache.go:107] acquiring lock: {Name:mk92a2a5fb8dde9e51922a55162996cccaaf10a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432303  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1025 10:36:15.432309  501769 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 37.654µs
	I1025 10:36:15.432314  501769 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1025 10:36:15.432331  501769 cache.go:107] acquiring lock: {Name:mkd43195497e2780982a3de630a4cda8f1c812f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432357  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1025 10:36:15.432362  501769 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 33.182µs
	I1025 10:36:15.432368  501769 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1025 10:36:15.432377  501769 cache.go:107] acquiring lock: {Name:mk2866f59a9236262f732426434fc9bafb724b61 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.432405  501769 cache.go:115] /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1025 10:36:15.432409  501769 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 33.403µs
	I1025 10:36:15.432414  501769 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1025 10:36:15.432420  501769 cache.go:87] Successfully saved all images to host disk.
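Note: because this profile runs with --preload=false, minikube verifies each image tarball in its local cache rather than downloading a preload bundle; the microsecond timings above are pure cache hits. The cache layout mirrors the registry path, so (assuming the MINIKUBE_HOME from this run) the tarballs checked above can be listed directly:

  $ ls /home/jenkins/minikube-integration/21794-292167/.minikube/cache/images/arm64/registry.k8s.io/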
	I1025 10:36:15.457514  501769 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:36:15.457535  501769 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:36:15.457547  501769 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:36:15.457575  501769 start.go:360] acquireMachinesLock for no-preload-768303: {Name:mkf575e11dd83318b723f79e28f313be28102c7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:15.457624  501769 start.go:364] duration metric: took 33.568µs to acquireMachinesLock for "no-preload-768303"
	I1025 10:36:15.457641  501769 start.go:96] Skipping create...Using existing machine configuration
	I1025 10:36:15.457648  501769 fix.go:54] fixHost starting: 
	I1025 10:36:15.457895  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:15.490005  501769 fix.go:112] recreateIfNeeded on no-preload-768303: state=Stopped err=<nil>
	W1025 10:36:15.490033  501769 fix.go:138] unexpected machine state, will restart: <nil>
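Note: two minikube invocations write to this log. Lines tagged pid 501769 belong to the no-preload-768303 restart; starting with the next line, entries tagged pid 500465 (with slightly earlier 10:36:14 timestamps) come from the concurrent newest-cni-491554 start. To follow a single process, filtering on the pid column works; assuming the log is first saved to a file:

  $ out/minikube-linux-arm64 -p newest-cni-491554 logs -n 25 > logs.txt && grep ' 500465 ' logs.txt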
	I1025 10:36:14.186772  500465 kubeadm.go:883] updating cluster {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:14.186899  500465 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:14.186980  500465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:14.221841  500465 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:14.221866  500465 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:36:14.221930  500465 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:14.251629  500465 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:14.251652  500465 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:14.251660  500465 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:14.251754  500465 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-491554 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
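Note: the empty ExecStart= line in the drop-in above is the usual systemd idiom: it clears the ExecStart inherited from kubelet.service so that the override which follows becomes the only start command. To inspect the merged unit on the node, something like:

  $ out/minikube-linux-arm64 -p newest-cni-491554 ssh "systemctl cat kubelet"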
	I1025 10:36:14.251842  500465 ssh_runner.go:195] Run: crio config
	I1025 10:36:14.321848  500465 cni.go:84] Creating CNI manager for ""
	I1025 10:36:14.321934  500465 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:14.321968  500465 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1025 10:36:14.322031  500465 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-491554 NodeName:newest-cni-491554 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:14.322250  500465 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-491554"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
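Note: the four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new via the 2212-byte scp below. Recent kubeadm releases can sanity-check such a file without applying it; a hedged example:

  $ kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new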
	I1025 10:36:14.322369  500465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:14.332729  500465 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:14.332812  500465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:14.340214  500465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:36:14.354329  500465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:14.379506  500465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1025 10:36:14.393570  500465 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:14.397259  500465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:14.406626  500465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:14.574690  500465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:14.636684  500465 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554 for IP: 192.168.76.2
	I1025 10:36:14.636704  500465 certs.go:195] generating shared ca certs ...
	I1025 10:36:14.636719  500465 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:14.636878  500465 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:14.636927  500465 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:14.636935  500465 certs.go:257] generating profile certs ...
	I1025 10:36:14.637015  500465 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/client.key
	I1025 10:36:14.637064  500465 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key.1df2bdda
	I1025 10:36:14.637141  500465 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key
	I1025 10:36:14.637253  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:14.637287  500465 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:14.637297  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:14.637323  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:14.637356  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:14.637379  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:14.637423  500465 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:14.638029  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:14.683070  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:14.721261  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:14.758069  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:14.808648  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:36:14.836206  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1025 10:36:14.861029  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:14.899389  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/newest-cni-491554/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:36:14.943287  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:14.965867  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:14.990341  500465 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:15.027736  500465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:15.050920  500465 ssh_runner.go:195] Run: openssl version
	I1025 10:36:15.060709  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:15.088600  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.095196  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.095320  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:15.150349  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:15.160882  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:15.172622  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.180280  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.180415  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:15.235432  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:36:15.247140  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:15.262015  500465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.267720  500465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.267859  500465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:15.312882  500465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
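Note: the <hash>.0 link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: the file name is the certificate's subject-name hash plus a .0 collision suffix, which is how verifiers locate CA certificates under /etc/ssl/certs. The hash is exactly what the `openssl x509 -hash -noout` runs above print, e.g.:

  $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
  b5213941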
	I1025 10:36:15.325327  500465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:15.330842  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:36:15.377296  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:36:15.431072  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:36:15.527602  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:36:15.618846  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:36:15.817691  500465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
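Note: each of the checks above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within that many seconds (86400 = 24 hours) — a cheap pre-restart guard against imminently expiring certs. For example:

  $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400
  Certificate will not expire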
	I1025 10:36:15.964228  500465 kubeadm.go:400] StartCluster: {Name:newest-cni-491554 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-491554 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:15.964327  500465 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:15.964433  500465 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:16.105630  500465 cri.go:89] found id: "9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3"
	I1025 10:36:16.105655  500465 cri.go:89] found id: "1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1"
	I1025 10:36:16.105662  500465 cri.go:89] found id: "43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719"
	I1025 10:36:16.105666  500465 cri.go:89] found id: "c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17"
	I1025 10:36:16.105669  500465 cri.go:89] found id: ""
	I1025 10:36:16.105729  500465 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:36:16.152016  500465 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:16Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:16.152186  500465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:16.173643  500465 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:36:16.173660  500465 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:36:16.173739  500465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:36:16.204670  500465 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:36:16.205153  500465 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-491554" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:16.205248  500465 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-491554" cluster setting kubeconfig missing "newest-cni-491554" context setting]
	I1025 10:36:16.205516  500465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:16.207102  500465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:36:16.228100  500465 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1025 10:36:16.228181  500465 kubeadm.go:601] duration metric: took 54.513753ms to restartPrimaryControlPlane
	I1025 10:36:16.228205  500465 kubeadm.go:402] duration metric: took 263.987643ms to StartCluster
	I1025 10:36:16.228250  500465 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:16.228345  500465 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:16.229112  500465 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:16.229656  500465 config.go:182] Loaded profile config "newest-cni-491554": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:16.229767  500465 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:36:16.229841  500465 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-491554"
	I1025 10:36:16.229865  500465 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-491554"
	W1025 10:36:16.229872  500465 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:36:16.229892  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.230372  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.229743  500465 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:16.230841  500465 addons.go:69] Setting dashboard=true in profile "newest-cni-491554"
	I1025 10:36:16.231316  500465 addons.go:238] Setting addon dashboard=true in "newest-cni-491554"
	W1025 10:36:16.231342  500465 addons.go:247] addon dashboard should already be in state true
	I1025 10:36:16.231392  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.232092  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.230855  500465 addons.go:69] Setting default-storageclass=true in profile "newest-cni-491554"
	I1025 10:36:16.238729  500465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-491554"
	I1025 10:36:16.238763  500465 out.go:179] * Verifying Kubernetes components...
	I1025 10:36:16.243426  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.247261  500465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:16.306653  500465 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:36:16.309814  500465 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:16.309839  500465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:36:16.309912  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.324113  500465 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:36:16.327138  500465 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:36:16.327945  500465 addons.go:238] Setting addon default-storageclass=true in "newest-cni-491554"
	W1025 10:36:16.327962  500465 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:36:16.327987  500465 host.go:66] Checking if "newest-cni-491554" exists ...
	I1025 10:36:16.328393  500465 cli_runner.go:164] Run: docker container inspect newest-cni-491554 --format={{.State.Status}}
	I1025 10:36:16.330146  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:36:16.330166  500465 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:36:16.330245  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.355340  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.394125  500465 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:16.394148  500465 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:36:16.394223  500465 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-491554
	I1025 10:36:16.395868  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.431623  500465 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33462 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/newest-cni-491554/id_rsa Username:docker}
	I1025 10:36:16.746212  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:16.787893  500465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:16.874873  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:16.906769  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:36:16.906790  500465 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:36:16.994116  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:36:16.994150  500465 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:36:15.495321  501769 out.go:252] * Restarting existing docker container for "no-preload-768303" ...
	I1025 10:36:15.495401  501769 cli_runner.go:164] Run: docker start no-preload-768303
	I1025 10:36:15.864718  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:15.895881  501769 kic.go:430] container "no-preload-768303" state is running.
	I1025 10:36:15.896686  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:15.922299  501769 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/config.json ...
	I1025 10:36:15.922526  501769 machine.go:93] provisionDockerMachine start ...
	I1025 10:36:15.922584  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:15.951361  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:15.951751  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:15.951762  501769 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:36:15.952562  501769 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35466->127.0.0.1:33467: read: connection reset by peer
	I1025 10:36:19.134912  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:36:19.134992  501769 ubuntu.go:182] provisioning hostname "no-preload-768303"
	I1025 10:36:19.135100  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:19.167786  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:19.168102  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:19.168113  501769 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-768303 && echo "no-preload-768303" | sudo tee /etc/hostname
	I1025 10:36:19.373484  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-768303
	
	I1025 10:36:19.373575  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:19.404494  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:19.404846  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:19.404868  501769 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-768303' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-768303/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-768303' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:36:19.591589  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:36:19.591618  501769 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:36:19.591649  501769 ubuntu.go:190] setting up certificates
	I1025 10:36:19.591658  501769 provision.go:84] configureAuth start
	I1025 10:36:19.591724  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:19.623368  501769 provision.go:143] copyHostCerts
	I1025 10:36:19.623448  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:36:19.623470  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:36:19.623556  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:36:19.623701  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:36:19.623714  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:36:19.623746  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:36:19.623808  501769 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:36:19.623816  501769 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:36:19.623840  501769 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:36:19.623897  501769 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.no-preload-768303 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-768303]
	I1025 10:36:17.111593  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:36:17.111616  500465 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:36:17.180846  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:36:17.180870  500465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:36:17.201686  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:36:17.201710  500465 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:36:17.217738  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:36:17.217763  500465 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:36:17.232998  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:36:17.233028  500465 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:36:17.249154  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:36:17.249177  500465 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:36:17.265124  500465 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:36:17.265156  500465 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:36:17.281717  500465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:36:23.039773  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.293479161s)
	I1025 10:36:23.039836  500465 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.251924244s)
	I1025 10:36:23.039870  500465 api_server.go:52] waiting for apiserver process to appear ...
	I1025 10:36:23.039924  500465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:36:23.039992  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.165098657s)
	I1025 10:36:23.155630  500465 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.873866534s)
	I1025 10:36:23.155865  500465 api_server.go:72] duration metric: took 6.925098659s to wait for apiserver process to appear ...
	I1025 10:36:23.155906  500465 api_server.go:88] waiting for apiserver healthz status ...
	I1025 10:36:23.155928  500465 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1025 10:36:23.158644  500465 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-491554 addons enable metrics-server
	
	I1025 10:36:23.161489  500465 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1025 10:36:23.164344  500465 addons.go:514] duration metric: took 6.934558755s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1025 10:36:23.165017  500465 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1025 10:36:23.166067  500465 api_server.go:141] control plane version: v1.34.1
	I1025 10:36:23.166113  500465 api_server.go:131] duration metric: took 10.197101ms to wait for apiserver health ...
	I1025 10:36:23.166137  500465 system_pods.go:43] waiting for kube-system pods to appear ...
	I1025 10:36:23.170230  500465 system_pods.go:59] 8 kube-system pods found
	I1025 10:36:23.170262  500465 system_pods.go:61] "coredns-66bc5c9577-zxmft" [c65f8d6e-61d0-4d82-b7af-b758693a7232] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:23.170271  500465 system_pods.go:61] "etcd-newest-cni-491554" [f243a6c8-1369-43b7-99b9-76822aea8145] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:36:23.170277  500465 system_pods.go:61] "kindnet-p6hkm" [b1e90261-b931-4949-be7a-bb6e26597d55] Running
	I1025 10:36:23.170285  500465 system_pods.go:61] "kube-apiserver-newest-cni-491554" [571dea08-1bde-4093-8f36-82f161cbd707] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:36:23.170294  500465 system_pods.go:61] "kube-controller-manager-newest-cni-491554" [c72d8953-fb33-4daa-b825-2b161239fc0e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:36:23.170300  500465 system_pods.go:61] "kube-proxy-vwhfz" [151013b4-f8bd-444f-b983-7fd1136a2003] Running
	I1025 10:36:23.170306  500465 system_pods.go:61] "kube-scheduler-newest-cni-491554" [14ab0451-bfca-4937-8ac3-892c41c89d45] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:36:23.170311  500465 system_pods.go:61] "storage-provisioner" [a653f861-3131-49d6-aa3d-4f280a2d5535] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1025 10:36:23.170317  500465 system_pods.go:74] duration metric: took 4.16325ms to wait for pod list to return data ...
	I1025 10:36:23.170324  500465 default_sa.go:34] waiting for default service account to be created ...
	I1025 10:36:23.173021  500465 default_sa.go:45] found service account: "default"
	I1025 10:36:23.173043  500465 default_sa.go:55] duration metric: took 2.713758ms for default service account to be created ...
	I1025 10:36:23.173055  500465 kubeadm.go:586] duration metric: took 6.942290333s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1025 10:36:23.173071  500465 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:36:23.176019  500465 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:36:23.176044  500465 node_conditions.go:123] node cpu capacity is 2
	I1025 10:36:23.176055  500465 node_conditions.go:105] duration metric: took 2.979879ms to run NodePressure ...
	I1025 10:36:23.176067  500465 start.go:241] waiting for startup goroutines ...
	I1025 10:36:23.176074  500465 start.go:246] waiting for cluster config update ...
	I1025 10:36:23.176086  500465 start.go:255] writing updated cluster config ...
	I1025 10:36:23.176358  500465 ssh_runner.go:195] Run: rm -f paused
	I1025 10:36:23.262559  500465 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:36:23.265597  500465 out.go:179] * Done! kubectl is now configured to use "newest-cni-491554" cluster and "default" namespace by default
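	The closing skew note compares the local kubectl client (1.33.2) with the cluster's control plane (1.34.1); one minor version of skew is within kubectl's supported range, so the message is informational only. The comparison can be reproduced directly (usage sketch):
	
	    # Prints clientVersion and serverVersion; compare the minor fields.
	    kubectl version -o json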
	I1025 10:36:20.402615  501769 provision.go:177] copyRemoteCerts
	I1025 10:36:20.402678  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:36:20.402722  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:20.427597  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:20.531788  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:36:20.557790  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1025 10:36:20.580101  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:36:20.605047  501769 provision.go:87] duration metric: took 1.013367052s to configureAuth
	I1025 10:36:20.605078  501769 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:36:20.605267  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:20.605377  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:20.648338  501769 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:20.648657  501769 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33467 <nil> <nil>}
	I1025 10:36:20.648672  501769 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:36:21.132971  501769 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:36:21.133056  501769 machine.go:96] duration metric: took 5.210521079s to provisionDockerMachine
	I1025 10:36:21.133132  501769 start.go:293] postStartSetup for "no-preload-768303" (driver="docker")
	I1025 10:36:21.133173  501769 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:36:21.133278  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:36:21.133352  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.161948  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.293509  501769 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:36:21.297662  501769 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:36:21.297694  501769 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:36:21.297707  501769 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:36:21.297760  501769 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:36:21.297840  501769 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:36:21.297945  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:36:21.310139  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:21.342071  501769 start.go:296] duration metric: took 208.905201ms for postStartSetup
	I1025 10:36:21.342155  501769 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:36:21.342199  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.368705  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.486297  501769 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:36:21.498259  501769 fix.go:56] duration metric: took 6.040603128s for fixHost
	I1025 10:36:21.498286  501769 start.go:83] releasing machines lock for "no-preload-768303", held for 6.040654337s
	I1025 10:36:21.498353  501769 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-768303
	I1025 10:36:21.555204  501769 ssh_runner.go:195] Run: cat /version.json
	I1025 10:36:21.555261  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.555490  501769 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:36:21.555545  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:21.590796  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.605143  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:21.830574  501769 ssh_runner.go:195] Run: systemctl --version
	I1025 10:36:21.838491  501769 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:36:21.919585  501769 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:36:21.927981  501769 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:36:21.928054  501769 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:36:21.937518  501769 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1025 10:36:21.937541  501769 start.go:495] detecting cgroup driver to use...
	I1025 10:36:21.937575  501769 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:36:21.937647  501769 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:36:21.968555  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:36:21.988720  501769 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:36:21.988785  501769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:36:22.011902  501769 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:36:22.041384  501769 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:36:22.236447  501769 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:36:22.447225  501769 docker.go:234] disabling docker service ...
	I1025 10:36:22.447296  501769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:36:22.468717  501769 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:36:22.490154  501769 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:36:22.669862  501769 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:36:22.843706  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:36:22.863903  501769 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:36:22.881980  501769 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:36:22.882084  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.894301  501769 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:36:22.894406  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.909110  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.927140  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.940477  501769 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:36:22.957224  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.969570  501769 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:22.983321  501769 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:23.000680  501769 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:36:23.015979  501769 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
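	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys (a reconstruction from the commands, not a capture from the host):
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]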
	I1025 10:36:23.028088  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:23.216577  501769 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1025 10:36:23.450492  501769 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:36:23.450571  501769 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:36:23.455624  501769 start.go:563] Will wait 60s for crictl version
	I1025 10:36:23.455705  501769 ssh_runner.go:195] Run: which crictl
	I1025 10:36:23.460123  501769 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:36:23.494481  501769 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:36:23.494574  501769 ssh_runner.go:195] Run: crio --version
	I1025 10:36:23.547353  501769 ssh_runner.go:195] Run: crio --version
	I1025 10:36:23.584542  501769 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:36:23.587636  501769 cli_runner.go:164] Run: docker network inspect no-preload-768303 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:23.605658  501769 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1025 10:36:23.609916  501769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:23.633582  501769 kubeadm.go:883] updating cluster {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:23.633701  501769 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:23.633761  501769 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:23.685217  501769 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:23.685243  501769 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:23.685267  501769 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:23.685380  501769 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-768303 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:36:23.685500  501769 ssh_runner.go:195] Run: crio config
	I1025 10:36:23.790601  501769 cni.go:84] Creating CNI manager for ""
	I1025 10:36:23.790627  501769 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:23.790654  501769 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:36:23.790679  501769 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-768303 NodeName:no-preload-768303 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:23.790813  501769 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-768303"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
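	
	The rendered config above is copied to /var/tmp/minikube/kubeadm.yaml.new below and only applied if it differs from the file already on the node. To check such a file by hand, recent kubeadm releases can validate it without touching the cluster (a usage sketch, assuming the binaries directory from the log):
	
	    # Validate the rendered config against the kubeadm API types.
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new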
	
	I1025 10:36:23.790888  501769 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:23.806255  501769 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:23.806337  501769 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:23.815583  501769 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1025 10:36:23.837862  501769 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:23.855842  501769 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1025 10:36:23.870950  501769 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:23.876361  501769 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:23.888081  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:24.103930  501769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:24.131955  501769 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303 for IP: 192.168.85.2
	I1025 10:36:24.131979  501769 certs.go:195] generating shared ca certs ...
	I1025 10:36:24.131996  501769 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:24.132171  501769 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:24.132221  501769 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:24.132234  501769 certs.go:257] generating profile certs ...
	I1025 10:36:24.132318  501769 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.key
	I1025 10:36:24.132387  501769 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key.a4ce95f1
	I1025 10:36:24.132428  501769 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key
	I1025 10:36:24.132551  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:24.132586  501769 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:24.132599  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:24.132627  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:24.132655  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:24.132680  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:24.132728  501769 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:24.133388  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:24.185379  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:24.248068  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:24.315354  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:24.395818  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1025 10:36:24.466312  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:36:24.539551  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:24.581328  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1025 10:36:24.603280  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:24.625708  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:24.649225  501769 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:24.672035  501769 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:24.704197  501769 ssh_runner.go:195] Run: openssl version
	I1025 10:36:24.713706  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:24.729233  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.738694  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.738827  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:24.784781  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:36:24.793604  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:24.803832  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.808241  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.808361  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:24.854657  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:24.863938  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:24.872605  501769 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.877461  501769 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.877528  501769 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:24.928847  501769 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:36:24.940328  501769 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:24.945516  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1025 10:36:25.025567  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1025 10:36:25.104112  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1025 10:36:25.174488  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1025 10:36:25.297161  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1025 10:36:25.399799  501769 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1025 10:36:25.488231  501769 kubeadm.go:400] StartCluster: {Name:no-preload-768303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-768303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:25.488386  501769 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:25.488508  501769 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:25.556878  501769 cri.go:89] found id: "c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f"
	I1025 10:36:25.556953  501769 cri.go:89] found id: "c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848"
	I1025 10:36:25.556974  501769 cri.go:89] found id: "82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88"
	I1025 10:36:25.556995  501769 cri.go:89] found id: "29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9"
	I1025 10:36:25.557031  501769 cri.go:89] found id: ""
	I1025 10:36:25.557118  501769 ssh_runner.go:195] Run: sudo runc list -f json
	W1025 10:36:25.616902  501769 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:36:25Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:36:25.617033  501769 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:25.648414  501769 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1025 10:36:25.648440  501769 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1025 10:36:25.648502  501769 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1025 10:36:25.666112  501769 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1025 10:36:25.666665  501769 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-768303" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:25.666912  501769 kubeconfig.go:62] /home/jenkins/minikube-integration/21794-292167/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-768303" cluster setting kubeconfig missing "no-preload-768303" context setting]
	I1025 10:36:25.667443  501769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:25.668925  501769 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1025 10:36:25.696662  501769 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1025 10:36:25.696699  501769 kubeadm.go:601] duration metric: took 48.251563ms to restartPrimaryControlPlane
	I1025 10:36:25.696710  501769 kubeadm.go:402] duration metric: took 208.49079ms to StartCluster
	I1025 10:36:25.696726  501769 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:25.696786  501769 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:25.697688  501769 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:25.697909  501769 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:25.698273  501769 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:25.698330  501769 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:36:25.698405  501769 addons.go:69] Setting storage-provisioner=true in profile "no-preload-768303"
	I1025 10:36:25.698421  501769 addons.go:238] Setting addon storage-provisioner=true in "no-preload-768303"
	I1025 10:36:25.698419  501769 addons.go:69] Setting dashboard=true in profile "no-preload-768303"
	I1025 10:36:25.698433  501769 addons.go:69] Setting default-storageclass=true in profile "no-preload-768303"
	I1025 10:36:25.698441  501769 addons.go:238] Setting addon dashboard=true in "no-preload-768303"
	I1025 10:36:25.698444  501769 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-768303"
	W1025 10:36:25.698449  501769 addons.go:247] addon dashboard should already be in state true
	I1025 10:36:25.698480  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.698755  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.698901  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	W1025 10:36:25.698427  501769 addons.go:247] addon storage-provisioner should already be in state true
	I1025 10:36:25.699263  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.699685  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.706152  501769 out.go:179] * Verifying Kubernetes components...
	I1025 10:36:25.709213  501769 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:25.760041  501769 addons.go:238] Setting addon default-storageclass=true in "no-preload-768303"
	W1025 10:36:25.760065  501769 addons.go:247] addon default-storageclass should already be in state true
	I1025 10:36:25.760091  501769 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:36:25.760558  501769 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:36:25.761786  501769 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1025 10:36:25.761867  501769 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:36:25.765123  501769 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1025 10:36:25.765286  501769 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:25.765300  501769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:36:25.765374  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:25.767972  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1025 10:36:25.768001  501769 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1025 10:36:25.768067  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:25.806856  501769 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:25.806881  501769 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:36:25.806944  501769 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:36:25.819702  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:25.849157  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:25.852749  501769 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:36:26.179822  501769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:36:26.203887  501769 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:26.241963  501769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:36:26.245920  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1025 10:36:26.245947  501769 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1025 10:36:26.372578  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1025 10:36:26.372619  501769 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1025 10:36:26.465836  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1025 10:36:26.465877  501769 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1025 10:36:26.491548  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1025 10:36:26.491582  501769 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1025 10:36:26.526588  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1025 10:36:26.526630  501769 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1025 10:36:26.563466  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1025 10:36:26.563510  501769 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1025 10:36:26.594085  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1025 10:36:26.594127  501769 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1025 10:36:26.647174  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1025 10:36:26.647202  501769 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1025 10:36:26.674083  501769 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1025 10:36:26.674121  501769 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1025 10:36:26.712115  501769 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
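The dashboard manifests are applied with the node's bundled kubectl against the in-node kubeconfig rather than the host's context. A hand-run equivalent over SSH would look roughly like this (a sketch; the profile name and paths are taken from the log above):

    minikube -p no-preload-768303 ssh -- sudo \
      KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml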
	
	
	==> CRI-O <==
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.098020326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.111539876Z" level=info msg="Running pod sandbox: kube-system/kindnet-p6hkm/POD" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.111625883Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.127952304Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.13072839Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=95da1eda-1605-449d-9774-b8ba30f3fb6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.152724653Z" level=info msg="Ran pod sandbox 2e91e5bb075804c4b3adc6d37b01c5cb74a746439edde6ac2f3bf2f61e2bb46e with infra container: kube-system/kube-proxy-vwhfz/POD" id=95da1eda-1605-449d-9774-b8ba30f3fb6c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.160646953Z" level=info msg="Ran pod sandbox 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49 with infra container: kube-system/kindnet-p6hkm/POD" id=9c3a1a2a-ba75-4e43-91e4-769725dec9c2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.171229636Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=293d7566-6f0f-4143-9c4b-97e588a87de2 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.171655817Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=c325d352-506a-475c-a41e-9914795d3f5f name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.180815158Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=46c39a8b-b6c9-4378-a83f-a44d90454463 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.181431455Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=4b47ca6a-89eb-4f50-9862-4f4f87e33487 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.18337199Z" level=info msg="Creating container: kube-system/kube-proxy-vwhfz/kube-proxy" id=286d71d4-5cb3-404f-a066-8630d958a073 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.183639293Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.187117798Z" level=info msg="Creating container: kube-system/kindnet-p6hkm/kindnet-cni" id=660f946d-1cc5-4ad4-8d72-4097607b0fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.187718611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.209108137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.217700503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.219094871Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.219347913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.323716684Z" level=info msg="Created container d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68: kube-system/kube-proxy-vwhfz/kube-proxy" id=286d71d4-5cb3-404f-a066-8630d958a073 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.324875989Z" level=info msg="Starting container: d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68" id=7e30feac-28a3-4fb0-95a0-91788fa6c06b name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.331369876Z" level=info msg="Created container 0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9: kube-system/kindnet-p6hkm/kindnet-cni" id=660f946d-1cc5-4ad4-8d72-4097607b0fe6 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.334984449Z" level=info msg="Starting container: 0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9" id=3cb21438-58d1-4c4b-bef8-896a0e294be3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.336358953Z" level=info msg="Started container" PID=1056 containerID=d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68 description=kube-system/kube-proxy-vwhfz/kube-proxy id=7e30feac-28a3-4fb0-95a0-91788fa6c06b name=/runtime.v1.RuntimeService/StartContainer sandboxID=2e91e5bb075804c4b3adc6d37b01c5cb74a746439edde6ac2f3bf2f61e2bb46e
	Oct 25 10:36:22 newest-cni-491554 crio[609]: time="2025-10-25T10:36:22.356067852Z" level=info msg="Started container" PID=1059 containerID=0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9 description=kube-system/kindnet-p6hkm/kindnet-cni id=3cb21438-58d1-4c4b-bef8-896a0e294be3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0f77a6ef9d856       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   4698b82d58590       kindnet-p6hkm                               kube-system
	d0616de1d50d5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   2e91e5bb07580       kube-proxy-vwhfz                            kube-system
	9e718b8f8ba60       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   90ef2d1a416d7       kube-apiserver-newest-cni-491554            kube-system
	1eaae55e80cdb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   82f12e9a6b7ae       etcd-newest-cni-491554                      kube-system
	43ac3c147f037       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   08af93849303c       kube-scheduler-newest-cni-491554            kube-system
	c658171399887       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   ef92cf161efa0       kube-controller-manager-newest-cni-491554   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-491554
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-491554
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=newest-cni-491554
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-491554
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:36:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 25 Oct 2025 10:36:21 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-491554
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0f4f685b-8865-430c-806a-9e13f4963eb6
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-491554                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-p6hkm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-newest-cni-491554             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-491554    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-vwhfz                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-newest-cni-491554             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   Starting                 37s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     36s                kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           32s                node-controller  Node newest-cni-491554 event: Registered Node newest-cni-491554 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 17s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 17s)  kubelet          Node newest-cni-491554 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 17s)  kubelet          Node newest-cni-491554 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-491554 event: Registered Node newest-cni-491554 in Controller
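The `Ready=False` condition above points at a missing CNI config in `/etc/cni/net.d/`, which is consistent with the event timeline: kindnet had restarted only seconds before this dump and had not yet written its config. Two quick checks once the node is reachable (a sketch using the profile name from this dump):

    minikube -p newest-cni-491554 ssh -- ls /etc/cni/net.d/
    kubectl --context newest-cni-491554 get nodes -o wide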
	
	
	==> dmesg <==
	[ +16.057450] overlayfs: idmapped layers are currently not supported
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[  +9.574283] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1eaae55e80cdb8992ea455d72ca1c72e2e47dcca465cf374974d0a5c67ef53d1] <==
	{"level":"warn","ts":"2025-10-25T10:36:18.881542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.907087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.944287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:18.976995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.015133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.026549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.060625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.086785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.111667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.128202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.176901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.197703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.219071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.265213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.329133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.371495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.423863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.468395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.523830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.595471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.697052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.749566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.791476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.839290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:19.959326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:36:31 up  2:19,  0 user,  load average: 5.38, 4.06, 3.36
	Linux newest-cni-491554 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f77a6ef9d8561d1a3feea16a2c68c896acfb75d20a246ada4bd8351e9b52cb9] <==
	I1025 10:36:22.477066       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:36:22.477386       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1025 10:36:22.488327       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:36:22.488843       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:36:22.488889       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:36:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:36:22.695464       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:36:22.695490       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:36:22.695499       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:36:22.696142       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
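The final kindnet line appears benign: NRI (Node Resource Interface) integration is optional, and the plugin exits when `/var/run/nri/nri.sock` is absent while the network-policy controller above it keeps running. Confirming the socket is simply missing (a sketch):

    minikube -p newest-cni-491554 ssh -- 'ls -l /var/run/nri/nri.sock 2>/dev/null || echo "NRI socket absent"'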
	
	
	==> kube-apiserver [9e718b8f8ba6066b9d7651a7a4a242af8577e67cbd7d1100a28b9bd9b2641ae3] <==
	I1025 10:36:21.254223       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:36:21.263921       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:36:21.277266       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1025 10:36:21.288040       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1025 10:36:21.288066       1 policy_source.go:240] refreshing policies
	I1025 10:36:21.313712       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1025 10:36:21.313805       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1025 10:36:21.314001       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:36:21.314122       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:36:21.320219       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1025 10:36:21.330012       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:36:21.355401       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:36:21.364175       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1025 10:36:21.403196       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1025 10:36:21.858793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:36:21.903026       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:36:22.411557       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:36:22.777518       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:36:22.875339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:36:22.913879       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:36:23.096254       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.89.216"}
	I1025 10:36:23.146861       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.228.238"}
	I1025 10:36:25.527322       1 controller.go:667] quota admission added evaluator for: endpoints
	I1025 10:36:25.777449       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:36:25.881649       1 controller.go:667] quota admission added evaluator for: replicasets.apps
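The single E-level line at 10:36:21.403 reads as a restart-time condition: the freshly restarted apiserver finds no server IPs recorded in storage and refuses to wipe the `kubernetes` Service endpoints rather than erase them all, then repairs them once it registers its own IP. One way to verify the endpoints recovered (a sketch using the context name from this dump):

    kubectl --context newest-cni-491554 -n default get endpoints kubernetes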
	
	
	==> kube-controller-manager [c658171399887c15913e36084cee18927761f14c5dd924e3f041af77c1d24c17] <==
	I1025 10:36:25.397026       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1025 10:36:25.398085       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1025 10:36:25.398096       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:36:25.405826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1025 10:36:25.405979       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1025 10:36:25.423291       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1025 10:36:25.425856       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:36:25.425895       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:25.426084       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1025 10:36:25.426123       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1025 10:36:25.426132       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1025 10:36:25.427265       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:36:25.428054       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1025 10:36:25.428114       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1025 10:36:25.428137       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1025 10:36:25.428143       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1025 10:36:25.428143       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:36:25.428149       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1025 10:36:25.435966       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:36:25.459257       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1025 10:36:25.462889       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:25.479644       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:25.479678       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:36:25.479688       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1025 10:36:25.524308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d0616de1d50d51154f548a875e8a8cffa135066c03d6408a39885ddcfbe06b68] <==
	I1025 10:36:22.522220       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:36:22.706536       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:36:22.921939       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:36:22.921987       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1025 10:36:22.922071       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:36:23.136927       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:36:23.137077       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:36:23.178575       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:36:23.178975       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:36:23.179193       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:23.181010       1 config.go:200] "Starting service config controller"
	I1025 10:36:23.181081       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:36:23.181125       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:36:23.181179       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:36:23.181242       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:36:23.181272       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:36:23.183834       1 config.go:309] "Starting node config controller"
	I1025 10:36:23.183925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:36:23.183976       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:36:23.281535       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:36:23.281689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1025 10:36:23.281720       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
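The E-level "configuration may be incomplete" message is advisory rather than fatal: with `nodePortAddresses` unset, NodePort connections are accepted on every local IP, and kube-proxy still finishes syncing all of its informer caches in the lines just above. The upstream flag the message recommends, quoted from the log itself:

    kube-proxy --nodeport-addresses=primary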
	
	
	==> kube-scheduler [43ac3c147f037112025c563e7da9ea849a0ea4d6b3c7dd03d13c8c0056a0c719] <==
	I1025 10:36:18.728227       1 serving.go:386] Generated self-signed cert in-memory
	W1025 10:36:21.239537       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1025 10:36:21.239577       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1025 10:36:21.239588       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1025 10:36:21.239595       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1025 10:36:21.325037       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:36:21.325063       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:21.336186       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:36:21.336664       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:21.354894       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:21.336691       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:36:21.455639       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
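The requestheader warnings here look transient: the scheduler's configmap lookup was denied while the restarted control plane was still reconciling, and its client-ca controller syncs a few lines later, so no action was needed in this run. If the grant were genuinely missing, the template in the warning could be filled in like this (illustrative binding name; since the scheduler authenticates as the user named in the error, --user stands in for the --serviceaccount placeholder):

    kubectl create rolebinding scheduler-authentication-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler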
	
	
	==> kubelet <==
	Oct 25 10:36:18 newest-cni-491554 kubelet[725]: E1025 10:36:18.731710     725 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-491554\" not found" node="newest-cni-491554"
	Oct 25 10:36:20 newest-cni-491554 kubelet[725]: I1025 10:36:20.996999     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.425936     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-491554\" already exists" pod="kube-system/etcd-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.425982     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440272     725 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440397     725 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.440437     725 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.441602     725 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.447999     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-491554\" already exists" pod="kube-system/kube-apiserver-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.448174     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.479068     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-491554\" already exists" pod="kube-system/kube-controller-manager-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.484009     725 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: E1025 10:36:21.514026     725 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-491554\" already exists" pod="kube-system/kube-scheduler-newest-cni-491554"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.777915     725 apiserver.go:52] "Watching apiserver"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.796363     725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882486     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-xtables-lock\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882548     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-cni-cfg\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882589     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-lib-modules\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882609     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/151013b4-f8bd-444f-b983-7fd1136a2003-lib-modules\") pod \"kube-proxy-vwhfz\" (UID: \"151013b4-f8bd-444f-b983-7fd1136a2003\") " pod="kube-system/kube-proxy-vwhfz"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.882626     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e90261-b931-4949-be7a-bb6e26597d55-xtables-lock\") pod \"kindnet-p6hkm\" (UID: \"b1e90261-b931-4949-be7a-bb6e26597d55\") " pod="kube-system/kindnet-p6hkm"
	Oct 25 10:36:21 newest-cni-491554 kubelet[725]: I1025 10:36:21.939174     725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 25 10:36:22 newest-cni-491554 kubelet[725]: W1025 10:36:22.159035     725 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3a1d576c3602867853477f035a9ea60cae7d92cb1d6d6f0519a5f74f9b275216/crio-4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49 WatchSource:0}: Error finding container 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49: Status 404 returned error can't find the container with id 4698b82d58590f8def5ed8ccdcdf9c5cc79e0634a49311f8ba6d52711314dd49
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:36:24 newest-cni-491554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491554 -n newest-cni-491554
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-491554 -n newest-cni-491554: exit status 2 (535.795018ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-491554 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj: exit status 1 (133.226492ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zxmft" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-6fngc" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4jtcj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-491554 describe pod coredns-66bc5c9577-zxmft storage-provisioner dashboard-metrics-scraper-6ffb444bf9-6fngc kubernetes-dashboard-855c9754f9-4jtcj: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (9.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-768303 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-768303 --alsologtostderr -v=1: exit status 80 (1.970925733s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-768303 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1025 10:37:29.219549  508292 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:37:29.219720  508292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:37:29.219750  508292 out.go:374] Setting ErrFile to fd 2...
	I1025 10:37:29.219771  508292 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:37:29.220166  508292 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:37:29.220508  508292 out.go:368] Setting JSON to false
	I1025 10:37:29.220569  508292 mustload.go:65] Loading cluster: no-preload-768303
	I1025 10:37:29.221494  508292 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:37:29.222012  508292 cli_runner.go:164] Run: docker container inspect no-preload-768303 --format={{.State.Status}}
	I1025 10:37:29.244976  508292 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:37:29.245309  508292 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:37:29.306456  508292 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:37:29.297145965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:37:29.307118  508292 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-768303 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1025 10:37:29.312352  508292 out.go:179] * Pausing node no-preload-768303 ... 
	I1025 10:37:29.315326  508292 host.go:66] Checking if "no-preload-768303" exists ...
	I1025 10:37:29.315649  508292 ssh_runner.go:195] Run: systemctl --version
	I1025 10:37:29.315696  508292 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-768303
	I1025 10:37:29.332422  508292 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33467 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/no-preload-768303/id_rsa Username:docker}
	I1025 10:37:29.433828  508292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:37:29.447988  508292 pause.go:52] kubelet running: true
	I1025 10:37:29.448053  508292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:37:29.714665  508292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:37:29.714812  508292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:37:29.798251  508292 cri.go:89] found id: "62a15f1c7868d04631806759f4487bee1b2c75b4a3a11adc84948d3d78dc6a31"
	I1025 10:37:29.798276  508292 cri.go:89] found id: "44fb97e92f81b6f58a2866e13945a4e276c3468dc6734864d6817b7fb99282a5"
	I1025 10:37:29.798282  508292 cri.go:89] found id: "c8f46af3f17bdb7311a5124e4ee22cdc269f9aca8899d31cda046d5330eb7dd0"
	I1025 10:37:29.798286  508292 cri.go:89] found id: "0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538"
	I1025 10:37:29.798289  508292 cri.go:89] found id: "403792b3f1ed46564bd4347a8a8647977de7599f4e850acc81992dbd9bc4e22b"
	I1025 10:37:29.798295  508292 cri.go:89] found id: "c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f"
	I1025 10:37:29.798298  508292 cri.go:89] found id: "c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848"
	I1025 10:37:29.798301  508292 cri.go:89] found id: "82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88"
	I1025 10:37:29.798304  508292 cri.go:89] found id: "29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9"
	I1025 10:37:29.798310  508292 cri.go:89] found id: "768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	I1025 10:37:29.798318  508292 cri.go:89] found id: "9732113c248ebd098cdf4f6f6e91edb5873b14fea51851da7264013a9aacb532"
	I1025 10:37:29.798321  508292 cri.go:89] found id: ""
	I1025 10:37:29.798369  508292 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:37:29.817794  508292 retry.go:31] will retry after 332.020744ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:37:29Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:37:30.150342  508292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:37:30.165515  508292 pause.go:52] kubelet running: false
	I1025 10:37:30.165607  508292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:37:30.339261  508292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:37:30.339351  508292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:37:30.410586  508292 cri.go:89] found id: "62a15f1c7868d04631806759f4487bee1b2c75b4a3a11adc84948d3d78dc6a31"
	I1025 10:37:30.410659  508292 cri.go:89] found id: "44fb97e92f81b6f58a2866e13945a4e276c3468dc6734864d6817b7fb99282a5"
	I1025 10:37:30.410680  508292 cri.go:89] found id: "c8f46af3f17bdb7311a5124e4ee22cdc269f9aca8899d31cda046d5330eb7dd0"
	I1025 10:37:30.410702  508292 cri.go:89] found id: "0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538"
	I1025 10:37:30.410739  508292 cri.go:89] found id: "403792b3f1ed46564bd4347a8a8647977de7599f4e850acc81992dbd9bc4e22b"
	I1025 10:37:30.410763  508292 cri.go:89] found id: "c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f"
	I1025 10:37:30.410792  508292 cri.go:89] found id: "c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848"
	I1025 10:37:30.410825  508292 cri.go:89] found id: "82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88"
	I1025 10:37:30.410848  508292 cri.go:89] found id: "29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9"
	I1025 10:37:30.410871  508292 cri.go:89] found id: "768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	I1025 10:37:30.410905  508292 cri.go:89] found id: "9732113c248ebd098cdf4f6f6e91edb5873b14fea51851da7264013a9aacb532"
	I1025 10:37:30.410929  508292 cri.go:89] found id: ""
	I1025 10:37:30.411016  508292 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:37:30.425863  508292 retry.go:31] will retry after 409.749676ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:37:30Z" level=error msg="open /run/runc: no such file or directory"
	I1025 10:37:30.836378  508292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:37:30.849940  508292 pause.go:52] kubelet running: false
	I1025 10:37:30.850039  508292 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1025 10:37:31.024938  508292 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1025 10:37:31.025065  508292 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1025 10:37:31.099613  508292 cri.go:89] found id: "62a15f1c7868d04631806759f4487bee1b2c75b4a3a11adc84948d3d78dc6a31"
	I1025 10:37:31.099641  508292 cri.go:89] found id: "44fb97e92f81b6f58a2866e13945a4e276c3468dc6734864d6817b7fb99282a5"
	I1025 10:37:31.099647  508292 cri.go:89] found id: "c8f46af3f17bdb7311a5124e4ee22cdc269f9aca8899d31cda046d5330eb7dd0"
	I1025 10:37:31.099652  508292 cri.go:89] found id: "0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538"
	I1025 10:37:31.099655  508292 cri.go:89] found id: "403792b3f1ed46564bd4347a8a8647977de7599f4e850acc81992dbd9bc4e22b"
	I1025 10:37:31.099659  508292 cri.go:89] found id: "c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f"
	I1025 10:37:31.099663  508292 cri.go:89] found id: "c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848"
	I1025 10:37:31.099700  508292 cri.go:89] found id: "82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88"
	I1025 10:37:31.099711  508292 cri.go:89] found id: "29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9"
	I1025 10:37:31.099719  508292 cri.go:89] found id: "768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	I1025 10:37:31.099722  508292 cri.go:89] found id: "9732113c248ebd098cdf4f6f6e91edb5873b14fea51851da7264013a9aacb532"
	I1025 10:37:31.099725  508292 cri.go:89] found id: ""
	I1025 10:37:31.099796  508292 ssh_runner.go:195] Run: sudo runc list -f json
	I1025 10:37:31.115629  508292 out.go:203] 
	W1025 10:37:31.118715  508292 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:37:31Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T10:37:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1025 10:37:31.118744  508292 out.go:285] * 
	* 
	W1025 10:37:31.126223  508292 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1025 10:37:31.129280  508292 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-768303 --alsologtostderr -v=1 failed: exit status 80
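The failure mode matches the other Pause tests in this run: the first pass disables kubelet ("kubelet running: true" then false), crictl still reports eleven containers, but `sudo runc list -f json` fails with "open /run/runc: no such file or directory", so minikube retries twice (after ~332ms and ~410ms) and then exits 80 with GUEST_PAUSE. A rough Go sketch of the retry shape visible in the retry.go lines (illustrative only; the real waits are jittered, not a fixed schedule):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryList mimics the pattern in the log: run the lister, and on failure
	// wait a short, growing interval before trying again, up to maxAttempts.
	func retryList(list func() error, maxAttempts int) error {
		wait := 300 * time.Millisecond // first observed wait was ~332ms
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = list(); err == nil {
				return nil
			}
			if attempt == maxAttempts {
				break
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait += 80 * time.Millisecond // waits grow slightly each round
		}
		return err
	}

	func main() {
		err := retryList(func() error {
			return errors.New("runc: open /run/runc: no such file or directory")
		}, 3)
		fmt.Println("final:", err) // after the third failure minikube raises GUEST_PAUSE
	}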
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-768303
helpers_test.go:243: (dbg) docker inspect no-preload-768303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	        "Created": "2025-10-25T10:34:41.024753053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501928,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:36:15.540152461Z",
	            "FinishedAt": "2025-10-25T10:36:14.437999407Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hosts",
	        "LogPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1-json.log",
	        "Name": "/no-preload-768303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-768303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-768303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	                "LowerDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-768303",
	                "Source": "/var/lib/docker/volumes/no-preload-768303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-768303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-768303",
	                "name.minikube.sigs.k8s.io": "no-preload-768303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "291066f16b765922cd121e97b3777d475683376e68ee721e072d2bd070aeac32",
	            "SandboxKey": "/var/run/docker/netns/291066f16b76",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-768303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a4:d1:a7:15:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "859ef893d5c6367e34b4500fcc3b03774bcaafce1067944be65176cec7fd385b",
	                    "EndpointID": "64a0c79f08e2210d0a6daf2aadbd7e90a53b714ad252e6c947e4aea5e37de05f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-768303",
	                        "9b0b6c2f298a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
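The Ports block in the inspect output above is what minikube's ssh setup reads: the cli_runner line earlier in the stderr shows the exact Go template used to pull the host port for 22/tcp (33467 here). A standalone sketch of the same lookup (illustrative; assumes the docker CLI is available and reuses the container name from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort resolves the host port mapped to the container's 22/tcp,
	// using the same inspect template that appears in the log above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		port, err := sshHostPort("no-preload-768303")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh port:", port) // 33467 in the inspect output above
	}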
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303: exit status 2 (350.838741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-768303 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-768303 logs -n 25: (1.310621232s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-768303 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-491554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-491554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-768303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:37 UTC │
	│ image   │ newest-cni-491554 image list --format=json                                                                                                                                                                                                    │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ pause   │ -p newest-cni-491554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-491554                                                                                                                                                                                                                          │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-491554                                                                                                                                                                                                                          │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p auto-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-821614                  │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ image   │ no-preload-768303 image list --format=json                                                                                                                                                                                                    │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:37 UTC │ 25 Oct 25 10:37 UTC │
	│ pause   │ -p no-preload-768303 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
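Rows in the audit table with an empty END TIME are commands that never completed: the failed pause invocations, the two metrics-server addon enables, and the still-in-flight auto-821614 start. For sifting a saved log dump, a small filter like this works (illustrative; assumes the table was written to logs.txt, e.g. via `minikube logs --file=logs.txt`):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("logs.txt") // hypothetical dump containing the audit table
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 1024*1024), 1024*1024) // audit rows can be very long
		for sc.Scan() {
			cells := strings.Split(sc.Text(), "│")
			if len(cells) != 9 { // 7 columns plus the fragments outside the outer borders
				continue
			}
			if strings.TrimSpace(cells[7]) == "" { // END TIME column is empty
				fmt.Printf("never finished: %s (profile %s)\n",
					strings.TrimSpace(cells[1]), strings.TrimSpace(cells[3]))
			}
		}
	}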
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:36:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:36:36.135575  505342 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:36:36.135733  505342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:36.135759  505342 out.go:374] Setting ErrFile to fd 2...
	I1025 10:36:36.135767  505342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:36.136138  505342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:36:36.136606  505342 out.go:368] Setting JSON to false
	I1025 10:36:36.137603  505342 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8346,"bootTime":1761380250,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:36:36.137671  505342 start.go:141] virtualization:  
	I1025 10:36:36.142119  505342 out.go:179] * [auto-821614] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:36:36.145590  505342 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:36:36.145653  505342 notify.go:220] Checking for updates...
	I1025 10:36:36.154497  505342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:36:36.157628  505342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:36.161334  505342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:36:36.164389  505342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:36:36.167429  505342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:36:36.170865  505342 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:36.171045  505342 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:36:36.200561  505342 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:36:36.200685  505342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:36.277513  505342 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:36.266304223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:36.277620  505342 docker.go:318] overlay module found
	I1025 10:36:36.280929  505342 out.go:179] * Using the docker driver based on user configuration
	I1025 10:36:36.283729  505342 start.go:305] selected driver: docker
	I1025 10:36:36.283750  505342 start.go:925] validating driver "docker" against <nil>
	I1025 10:36:36.283765  505342 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:36:36.284472  505342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:36.344722  505342 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:36.335982345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:36.344879  505342 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:36:36.345112  505342 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:36.348019  505342 out.go:179] * Using Docker driver with root privileges
	I1025 10:36:36.350865  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:36:36.350935  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:36.350950  505342 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:36:36.351036  505342 start.go:349] cluster config:
	{Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:36.354165  505342 out.go:179] * Starting "auto-821614" primary control-plane node in "auto-821614" cluster
	I1025 10:36:36.356964  505342 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:36:36.359885  505342 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:36:36.362708  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:36.362757  505342 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:36:36.362769  505342 cache.go:58] Caching tarball of preloaded images
	I1025 10:36:36.362795  505342 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:36:36.362873  505342 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:36:36.362884  505342 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:36:36.362993  505342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json ...
	I1025 10:36:36.363018  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json: {Name:mk288058acd38774af24281c2331edd55139cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:36.384376  505342 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:36:36.384400  505342 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:36:36.384419  505342 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:36:36.384447  505342 start.go:360] acquireMachinesLock for auto-821614: {Name:mkde8fde4ed6117cd610f36937ca0e2ebed9ded6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:36.384595  505342 start.go:364] duration metric: took 130.455µs to acquireMachinesLock for "auto-821614"
	I1025 10:36:36.384626  505342 start.go:93] Provisioning new machine with config: &{Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:36.384702  505342 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:36:35.151969  501769 system_pods.go:86] 8 kube-system pods found
	I1025 10:36:35.152003  501769 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:36:35.152014  501769 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:36:35.152020  501769 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:36:35.152027  501769 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:36:35.152033  501769 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:36:35.152038  501769 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:36:35.152049  501769 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:36:35.152054  501769 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Running
	I1025 10:36:35.152061  501769 system_pods.go:126] duration metric: took 93.937402ms to wait for k8s-apps to be running ...
	I1025 10:36:35.152069  501769 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:36:35.152130  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:36:35.233298  501769 system_svc.go:56] duration metric: took 81.217653ms WaitForService to wait for kubelet
	I1025 10:36:35.233323  501769 kubeadm.go:586] duration metric: took 9.535378047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:35.233343  501769 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:36:35.246621  501769 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:36:35.246651  501769 node_conditions.go:123] node cpu capacity is 2
	I1025 10:36:35.246663  501769 node_conditions.go:105] duration metric: took 13.314622ms to run NodePressure ...
	I1025 10:36:35.246675  501769 start.go:241] waiting for startup goroutines ...
	I1025 10:36:35.246683  501769 start.go:246] waiting for cluster config update ...
	I1025 10:36:35.246693  501769 start.go:255] writing updated cluster config ...
	I1025 10:36:35.246949  501769 ssh_runner.go:195] Run: rm -f paused
	I1025 10:36:35.251271  501769 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:36:35.259782  501769 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:36:37.266355  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:39.266742  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:36.388128  505342 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:36:36.388358  505342 start.go:159] libmachine.API.Create for "auto-821614" (driver="docker")
	I1025 10:36:36.388407  505342 client.go:168] LocalClient.Create starting
	I1025 10:36:36.388482  505342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:36:36.388520  505342 main.go:141] libmachine: Decoding PEM data...
	I1025 10:36:36.388555  505342 main.go:141] libmachine: Parsing certificate...
	I1025 10:36:36.388616  505342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:36:36.388650  505342 main.go:141] libmachine: Decoding PEM data...
	I1025 10:36:36.388665  505342 main.go:141] libmachine: Parsing certificate...
	I1025 10:36:36.389061  505342 cli_runner.go:164] Run: docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:36:36.409944  505342 cli_runner.go:211] docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:36:36.410026  505342 network_create.go:284] running [docker network inspect auto-821614] to gather additional debugging logs...
	I1025 10:36:36.410053  505342 cli_runner.go:164] Run: docker network inspect auto-821614
	W1025 10:36:36.430843  505342 cli_runner.go:211] docker network inspect auto-821614 returned with exit code 1
	I1025 10:36:36.430872  505342 network_create.go:287] error running [docker network inspect auto-821614]: docker network inspect auto-821614: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-821614 not found
	I1025 10:36:36.430887  505342 network_create.go:289] output of [docker network inspect auto-821614]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-821614 not found
	
	** /stderr **
	I1025 10:36:36.430977  505342 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:36.451344  505342 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:36:36.451607  505342 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:36:36.452002  505342 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:36:36.452474  505342 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f2aa0}
	I1025 10:36:36.452503  505342 network_create.go:124] attempt to create docker network auto-821614 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:36:36.452580  505342 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-821614 auto-821614
	I1025 10:36:36.517464  505342 network_create.go:108] docker network auto-821614 192.168.76.0/24 created
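Note: minikube scanned the existing bridge networks above (192.168.49/58/67.0/24 were all taken) and settled on the first free /24. A minimal manual equivalent, with every value taken from the Run: line above rather than from minikube internals:

    # Recreate the cluster network by hand (illustrative; values from this log):
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=auto-821614 \
      auto-821614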
	I1025 10:36:36.517498  505342 kic.go:121] calculated static IP "192.168.76.2" for the "auto-821614" container
	I1025 10:36:36.517571  505342 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:36:36.536737  505342 cli_runner.go:164] Run: docker volume create auto-821614 --label name.minikube.sigs.k8s.io=auto-821614 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:36:36.553993  505342 oci.go:103] Successfully created a docker volume auto-821614
	I1025 10:36:36.554096  505342 cli_runner.go:164] Run: docker run --rm --name auto-821614-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-821614 --entrypoint /usr/bin/test -v auto-821614:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:36:37.192202  505342 oci.go:107] Successfully prepared a docker volume auto-821614
	I1025 10:36:37.192256  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:37.192276  505342 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:36:37.192344  505342 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-821614:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:36:41.767939  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:43.796269  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:42.692814  505342 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-821614:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.500430282s)
	I1025 10:36:42.692841  505342 kic.go:203] duration metric: took 5.500561797s to extract preloaded images to volume ...
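Note: the preload tarball is unpacked into the node's named volume by a throwaway container whose entrypoint is overridden to tar, so the host needs neither lz4 nor write access to the volume. The invocation from this run, reflowed for readability (paths are specific to this CI host):

    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro \
      -v auto-821614:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
      -I lz4 -xf /preloaded.tar -C /extractDir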
	W1025 10:36:42.692974  505342 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:36:42.693076  505342 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:36:42.773055  505342 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-821614 --name auto-821614 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-821614 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-821614 --network auto-821614 --ip 192.168.76.2 --volume auto-821614:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:36:43.267634  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Running}}
	I1025 10:36:43.289144  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:43.317858  505342 cli_runner.go:164] Run: docker exec auto-821614 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:36:43.384132  505342 oci.go:144] the created container "auto-821614" has a running status.
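Note: the node container publishes each of its service ports (22, 2376, 5000, 8443, 32443) on an ephemeral 127.0.0.1 port, so nothing relies on a fixed mapping; the SSH port is re-resolved before every connection with the same inspect template that appears throughout this log:

    # Resolve the host side of the container's SSH port (prints 33472 in this run):
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' auto-821614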
	I1025 10:36:43.384166  505342 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa...
	I1025 10:36:44.271194  505342 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:36:44.291567  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:44.313919  505342 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:36:44.313942  505342 kic_runner.go:114] Args: [docker exec --privileged auto-821614 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:36:44.358750  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:44.376634  505342 machine.go:93] provisionDockerMachine start ...
	I1025 10:36:44.376734  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:44.393789  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:44.394133  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:44.394151  505342 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:36:44.394824  505342 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1025 10:36:46.265071  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:48.267669  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:47.551177  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-821614
	
	I1025 10:36:47.551201  505342 ubuntu.go:182] provisioning hostname "auto-821614"
	I1025 10:36:47.551262  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:47.570622  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:47.570923  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:47.570941  505342 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-821614 && echo "auto-821614" | sudo tee /etc/hostname
	I1025 10:36:47.748090  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-821614
	
	I1025 10:36:47.748168  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:47.770523  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:47.770840  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:47.770859  505342 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-821614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-821614/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-821614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:36:47.936885  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:36:47.936972  505342 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:36:47.937032  505342 ubuntu.go:190] setting up certificates
	I1025 10:36:47.937063  505342 provision.go:84] configureAuth start
	I1025 10:36:47.937159  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:47.962276  505342 provision.go:143] copyHostCerts
	I1025 10:36:47.962334  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:36:47.962343  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:36:47.962409  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:36:47.962496  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:36:47.962501  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:36:47.962528  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:36:47.962587  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:36:47.962591  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:36:47.962615  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:36:47.962663  505342 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.auto-821614 san=[127.0.0.1 192.168.76.2 auto-821614 localhost minikube]
	I1025 10:36:48.048165  505342 provision.go:177] copyRemoteCerts
	I1025 10:36:48.048276  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:36:48.048350  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.067398  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:48.188532  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:36:48.222538  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:36:48.248790  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:36:48.281212  505342 provision.go:87] duration metric: took 344.114244ms to configureAuth
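Note: the server certificate copied to /etc/docker/server.pem above should carry exactly the SANs from the san=[...] list (127.0.0.1, 192.168.76.2, auto-821614, localhost, minikube). A hedged way to confirm that against the local copy (path from this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'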
	I1025 10:36:48.281289  505342 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:36:48.281504  505342 config.go:182] Loaded profile config "auto-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:48.281651  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.302767  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:48.303066  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:48.303088  505342 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:36:48.679395  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:36:48.679464  505342 machine.go:96] duration metric: took 4.302806151s to provisionDockerMachine
	I1025 10:36:48.679545  505342 client.go:171] duration metric: took 12.291125842s to LocalClient.Create
	I1025 10:36:48.679581  505342 start.go:167] duration metric: took 12.291223731s to libmachine.API.Create "auto-821614"
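Note: the last provisioning step above writes a sysconfig fragment that makes CRI-O treat the whole service CIDR as an insecure registry, then bounces the runtime. The SSH payload, reproduced as a standalone script:

    sudo mkdir -p /etc/sysconfig
    printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio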
	I1025 10:36:48.679606  505342 start.go:293] postStartSetup for "auto-821614" (driver="docker")
	I1025 10:36:48.679647  505342 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:36:48.679752  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:36:48.679823  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.712328  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:48.832032  505342 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:36:48.837415  505342 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:36:48.837467  505342 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:36:48.837483  505342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:36:48.837541  505342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:36:48.837622  505342 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:36:48.837732  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:36:48.846245  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:48.867220  505342 start.go:296] duration metric: took 187.569071ms for postStartSetup
	I1025 10:36:48.867568  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:48.888365  505342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json ...
	I1025 10:36:48.888643  505342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:36:48.888693  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.912295  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.022582  505342 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:36:49.027709  505342 start.go:128] duration metric: took 12.642992569s to createHost
	I1025 10:36:49.027736  505342 start.go:83] releasing machines lock for "auto-821614", held for 12.643126406s
	I1025 10:36:49.027808  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:49.054506  505342 ssh_runner.go:195] Run: cat /version.json
	I1025 10:36:49.054563  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:49.054803  505342 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:36:49.054855  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:49.089653  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.098380  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.215804  505342 ssh_runner.go:195] Run: systemctl --version
	I1025 10:36:49.343721  505342 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:36:49.407664  505342 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:36:49.413454  505342 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:36:49.413532  505342 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:36:49.464242  505342 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:36:49.464283  505342 start.go:495] detecting cgroup driver to use...
	I1025 10:36:49.464322  505342 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:36:49.464387  505342 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:36:49.502561  505342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:36:49.517335  505342 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:36:49.517420  505342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:36:49.537985  505342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:36:49.557092  505342 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:36:49.706713  505342 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:36:49.905689  505342 docker.go:234] disabling docker service ...
	I1025 10:36:49.905827  505342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:36:49.935868  505342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:36:49.958225  505342 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:36:50.105146  505342 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:36:50.300074  505342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
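Note: because this profile runs CRI-O, the competing runtimes are stopped, disabled, and masked so a daemon-reload or reboot cannot bring them back. Condensed from the Run: lines above:

    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service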
	I1025 10:36:50.320435  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:36:50.341247  505342 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:36:50.341387  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.354300  505342 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:36:50.354462  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.366673  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.378583  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.390598  505342 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:36:50.402837  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.415264  505342 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.436096  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.449687  505342 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:36:50.461394  505342 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:36:50.472485  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:50.608427  505342 ssh_runner.go:195] Run: sudo systemctl restart crio
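Note: the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a known state: pause image pinned, cgroupfs as the cgroup manager, conmon placed in the pod cgroup, and unprivileged ports opened via default_sysctls. A hedged reconstruction of the fragment they produce (section headers per stock CRI-O config; they are not shown in the log):

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"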
	I1025 10:36:50.875976  505342 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:36:50.876060  505342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:36:50.880441  505342 start.go:563] Will wait 60s for crictl version
	I1025 10:36:50.880552  505342 ssh_runner.go:195] Run: which crictl
	I1025 10:36:50.884431  505342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:36:50.913645  505342 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:36:50.913754  505342 ssh_runner.go:195] Run: crio --version
	I1025 10:36:50.948405  505342 ssh_runner.go:195] Run: crio --version
	I1025 10:36:50.981757  505342 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:36:50.984813  505342 cli_runner.go:164] Run: docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:51.006635  505342 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:36:51.011088  505342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
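Note: /etc/hosts is updated with a rewrite-and-copy idiom rather than sed -i, so a half-written file is never left in place; the same pattern recurs below for control-plane.minikube.internal. The inner script from the Run: line above:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts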
	I1025 10:36:51.023089  505342 kubeadm.go:883] updating cluster {Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:51.023236  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:51.023291  505342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:51.057107  505342 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:51.057131  505342 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:36:51.057201  505342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:51.088626  505342 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:51.088651  505342 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:51.088659  505342 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:51.088769  505342 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-821614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
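Note: the doubled ExecStart= in the unit text above is the standard systemd override idiom: the empty assignment clears the packaged command so the next line fully replaces it rather than appending a second one. As a drop-in it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 361-byte scp further down); the [Service] block, reproduced:

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-821614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2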
	I1025 10:36:51.088858  505342 ssh_runner.go:195] Run: crio config
	I1025 10:36:51.149441  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:36:51.149462  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:51.149476  505342 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:36:51.149501  505342 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-821614 NodeName:auto-821614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:51.149633  505342 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-821614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:36:51.149705  505342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:51.159340  505342 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:51.159423  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:51.168095  505342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 10:36:51.181468  505342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:51.194626  505342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
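Note: with the rendered config staged at /var/tmp/minikube/kubeadm.yaml.new (2208 bytes), it can be sanity-checked before the real init. A hedged sketch using the kubeadm binary already found on the node; `kubeadm config validate` exists in recent releases, and `--dry-run` is the fallback:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # or rehearse the full init without mutating the node:
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run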
	I1025 10:36:51.207674  505342 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:51.211146  505342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:51.221111  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:51.337465  505342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:51.353266  505342 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614 for IP: 192.168.76.2
	I1025 10:36:51.353338  505342 certs.go:195] generating shared ca certs ...
	I1025 10:36:51.353377  505342 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:51.353572  505342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:51.353673  505342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:51.353710  505342 certs.go:257] generating profile certs ...
	I1025 10:36:51.353797  505342 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key
	I1025 10:36:51.353843  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt with IP's: []
	I1025 10:36:52.249587  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt ...
	I1025 10:36:52.249620  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt: {Name:mk68c5657cec1ee108d29f35b574472597d3841a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.249812  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key ...
	I1025 10:36:52.249826  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key: {Name:mk4db70f76cc1d2d3527c36e08dc1099e2263815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.249925  505342 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253
	I1025 10:36:52.249943  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:36:52.393728  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 ...
	I1025 10:36:52.393761  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253: {Name:mkc6133a807a3da2ea2fc5216272f79a38a7963d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.393947  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253 ...
	I1025 10:36:52.393961  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253: {Name:mka41dc493d8abb5f80cfdbb2013cbeba4c1c809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.394050  505342 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt
	I1025 10:36:52.394146  505342 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key
	I1025 10:36:52.394207  505342 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key
	I1025 10:36:52.394225  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt with IP's: []
	I1025 10:36:52.507700  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt ...
	I1025 10:36:52.507726  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt: {Name:mk3924dd4860a4fd48bd155b7b678c9501c3cb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.507899  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key ...
	I1025 10:36:52.507912  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key: {Name:mkccacbfe9de1053b9a7312651ffe4e577d2fb7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.508108  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:52.508149  505342 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:52.508165  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:52.508191  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:52.508219  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:52.508245  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:52.508288  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:52.508991  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:52.528658  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:52.547758  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:52.566107  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:52.584094  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 10:36:52.602602  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:36:52.620722  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:52.638155  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:36:52.655903  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:52.673927  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:52.691039  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:52.708494  505342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:52.721521  505342 ssh_runner.go:195] Run: openssl version
	I1025 10:36:52.729636  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:52.738537  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.742275  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.742338  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.790421  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:36:52.801286  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:52.810465  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.814691  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.814806  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.859400  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:52.867726  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:52.875644  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.879566  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.879628  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.921166  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
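Note: the 8-hex-digit link names above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes; the symlinks are what let CApath-style lookups in /etc/ssl/certs resolve each CA. The two-step pattern, with values from this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0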
	I1025 10:36:52.929216  505342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:52.932614  505342 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:36:52.932690  505342 kubeadm.go:400] StartCluster: {Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:52.932780  505342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:52.932844  505342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:52.964764  505342 cri.go:89] found id: ""
	I1025 10:36:52.964895  505342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:52.972681  505342 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:36:52.980393  505342 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:36:52.980486  505342 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:36:52.988044  505342 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:36:52.988066  505342 kubeadm.go:157] found existing configuration files:
	
	I1025 10:36:52.988117  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:36:52.995780  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:36:52.995841  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:36:53.004366  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:36:53.013084  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:36:53.013211  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:36:53.021950  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:36:53.030960  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:36:53.031072  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:36:53.038540  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:36:53.046442  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:36:53.046509  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1025 10:36:53.053656  505342 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:36:53.094503  505342 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:36:53.094792  505342 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:36:53.121798  505342 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:36:53.121877  505342 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:36:53.121918  505342 kubeadm.go:318] OS: Linux
	I1025 10:36:53.121970  505342 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:36:53.122025  505342 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:36:53.122077  505342 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:36:53.122136  505342 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:36:53.122191  505342 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:36:53.122245  505342 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:36:53.122297  505342 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:36:53.122350  505342 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:36:53.122402  505342 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:36:53.196120  505342 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:36:53.196745  505342 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:36:53.196876  505342 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:36:53.203689  505342 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:36:50.273370  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:52.765687  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:54.767219  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:53.208694  505342 out.go:252]   - Generating certificates and keys ...
	I1025 10:36:53.208856  505342 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:36:53.208962  505342 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:36:53.435968  505342 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:36:54.090061  505342 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:36:54.662434  505342 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:36:55.022645  505342 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:36:55.788707  505342 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:36:55.789047  505342 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-821614 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:36:56.009144  505342 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:36:56.009545  505342 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-821614 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1025 10:36:57.267678  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:59.776871  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:56.206455  505342 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:36:57.785951  505342 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:36:58.565167  505342 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:36:58.565516  505342 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:36:58.840062  505342 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:36:59.135397  505342 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:36:59.474095  505342 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:37:00.505163  505342 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:37:01.179011  505342 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:37:01.179616  505342 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:37:01.183877  505342 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 10:37:02.266487  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:04.768069  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:01.187321  505342 out.go:252]   - Booting up control plane ...
	I1025 10:37:01.187445  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:37:01.187564  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:37:01.189775  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:37:01.207662  505342 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:37:01.208377  505342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:37:01.217114  505342 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:37:01.217449  505342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:37:01.217655  505342 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:37:01.366068  505342 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:37:01.366196  505342 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:37:03.367955  505342 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001089851s
	I1025 10:37:03.373219  505342 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:37:03.373335  505342 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:37:03.373633  505342 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:37:03.373741  505342 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1025 10:37:07.266525  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:09.766690  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:06.292001  505342 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.918117751s
	I1025 10:37:08.989188  505342 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.615951352s
	I1025 10:37:10.874851  505342 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.50129031s
	I1025 10:37:10.896813  505342 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:37:10.916381  505342 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:37:10.931443  505342 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:37:10.931661  505342 kubeadm.go:318] [mark-control-plane] Marking the node auto-821614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:37:10.951711  505342 kubeadm.go:318] [bootstrap-token] Using token: t2dl2l.8q4tpfrh2bqn7jsk
	I1025 10:37:10.954756  505342 out.go:252]   - Configuring RBAC rules ...
	I1025 10:37:10.954915  505342 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:37:10.961161  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:37:10.975989  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:37:10.982323  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:37:10.987204  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:37:10.991658  505342 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:37:11.285124  505342 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:37:11.720539  505342 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:37:12.286334  505342 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:37:12.287689  505342 kubeadm.go:318] 
	I1025 10:37:12.287764  505342 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:37:12.287770  505342 kubeadm.go:318] 
	I1025 10:37:12.287851  505342 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:37:12.287855  505342 kubeadm.go:318] 
	I1025 10:37:12.287882  505342 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:37:12.287944  505342 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:37:12.287996  505342 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:37:12.288001  505342 kubeadm.go:318] 
	I1025 10:37:12.288058  505342 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:37:12.288063  505342 kubeadm.go:318] 
	I1025 10:37:12.288115  505342 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:37:12.288119  505342 kubeadm.go:318] 
	I1025 10:37:12.288174  505342 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:37:12.288252  505342 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:37:12.288323  505342 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:37:12.288328  505342 kubeadm.go:318] 
	I1025 10:37:12.288416  505342 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:37:12.288502  505342 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:37:12.288507  505342 kubeadm.go:318] 
	I1025 10:37:12.288595  505342 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token t2dl2l.8q4tpfrh2bqn7jsk \
	I1025 10:37:12.288725  505342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:37:12.288749  505342 kubeadm.go:318] 	--control-plane 
	I1025 10:37:12.288755  505342 kubeadm.go:318] 
	I1025 10:37:12.288844  505342 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:37:12.288849  505342 kubeadm.go:318] 
	I1025 10:37:12.288936  505342 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token t2dl2l.8q4tpfrh2bqn7jsk \
	I1025 10:37:12.289042  505342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:37:12.293769  505342 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:37:12.293995  505342 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:37:12.294101  505342 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1025 10:37:12.294123  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:37:12.294130  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:37:12.297336  505342 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:37:12.265351  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:14.767095  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:12.300350  505342 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:37:12.305127  505342 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:37:12.305150  505342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:37:12.319085  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:37:13.070341  505342 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:37:13.070426  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:13.070483  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-821614 minikube.k8s.io/updated_at=2025_10_25T10_37_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=auto-821614 minikube.k8s.io/primary=true
	I1025 10:37:13.235592  505342 ops.go:34] apiserver oom_adj: -16
	I1025 10:37:13.235601  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:13.735767  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:14.235919  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:14.735673  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:15.235643  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:15.736402  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.236024  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.735706  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.885807  505342 kubeadm.go:1113] duration metric: took 3.815439627s to wait for elevateKubeSystemPrivileges
	I1025 10:37:16.885834  505342 kubeadm.go:402] duration metric: took 23.953147674s to StartCluster
	I1025 10:37:16.885850  505342 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:37:16.885910  505342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:37:16.886906  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:37:16.887092  505342 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:37:16.887383  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:37:16.887622  505342 config.go:182] Loaded profile config "auto-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:37:16.887658  505342 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:37:16.887716  505342 addons.go:69] Setting storage-provisioner=true in profile "auto-821614"
	I1025 10:37:16.887730  505342 addons.go:238] Setting addon storage-provisioner=true in "auto-821614"
	I1025 10:37:16.887751  505342 host.go:66] Checking if "auto-821614" exists ...
	I1025 10:37:16.888051  505342 addons.go:69] Setting default-storageclass=true in profile "auto-821614"
	I1025 10:37:16.888066  505342 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-821614"
	I1025 10:37:16.888360  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.888841  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.893097  505342 out.go:179] * Verifying Kubernetes components...
	I1025 10:37:16.895970  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:37:16.928511  505342 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:37:15.780567  501769 pod_ready.go:94] pod "coredns-66bc5c9577-xpwdq" is "Ready"
	I1025 10:37:15.780605  501769 pod_ready.go:86] duration metric: took 40.520798507s for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.794474  501769 pod_ready.go:83] waiting for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.803412  501769 pod_ready.go:94] pod "etcd-no-preload-768303" is "Ready"
	I1025 10:37:15.803438  501769 pod_ready.go:86] duration metric: took 8.938691ms for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.894941  501769 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.900376  501769 pod_ready.go:94] pod "kube-apiserver-no-preload-768303" is "Ready"
	I1025 10:37:15.900404  501769 pod_ready.go:86] duration metric: took 5.430337ms for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.904022  501769 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.963989  501769 pod_ready.go:94] pod "kube-controller-manager-no-preload-768303" is "Ready"
	I1025 10:37:15.964017  501769 pod_ready.go:86] duration metric: took 59.960532ms for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.164262  501769 pod_ready.go:83] waiting for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.563482  501769 pod_ready.go:94] pod "kube-proxy-m9bnn" is "Ready"
	I1025 10:37:16.563510  501769 pod_ready.go:86] duration metric: took 399.218681ms for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.763785  501769 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:17.163454  501769 pod_ready.go:94] pod "kube-scheduler-no-preload-768303" is "Ready"
	I1025 10:37:17.163485  501769 pod_ready.go:86] duration metric: took 399.668582ms for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:17.163498  501769 pod_ready.go:40] duration metric: took 41.912196728s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:37:17.274423  501769 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:37:17.277838  501769 out.go:179] * Done! kubectl is now configured to use "no-preload-768303" cluster and "default" namespace by default
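The "minor skew: 1" note above (kubectl 1.33.2 against cluster 1.34.1) is within the one-minor-version window kubectl supports, so minikube only logs it. A quick manual check against the freshly configured context (a sketch; the context name is taken from the profile in the log):

	# Compare client and server versions for the no-preload profile
	kubectl --context no-preload-768303 version --output=yaml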
	I1025 10:37:16.931427  505342 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:37:16.931449  505342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:37:16.931515  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:37:16.934958  505342 addons.go:238] Setting addon default-storageclass=true in "auto-821614"
	I1025 10:37:16.935002  505342 host.go:66] Checking if "auto-821614" exists ...
	I1025 10:37:16.935428  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.973929  505342 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:37:16.973951  505342 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:37:16.974012  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:37:16.979057  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:37:17.001570  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:37:17.274997  505342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:37:17.403362  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:37:17.403479  505342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:37:17.437853  505342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:37:18.229191  505342 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1025 10:37:18.231123  505342 node_ready.go:35] waiting up to 15m0s for node "auto-821614" to be "Ready" ...
	I1025 10:37:18.268001  505342 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:37:18.270568  505342 addons.go:514] duration metric: took 1.382886619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:37:18.734321  505342 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-821614" context rescaled to 1 replicas
	W1025 10:37:20.234027  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:22.234408  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:24.234660  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:26.735050  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:29.236498  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
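The node_ready retries above poll the node object until its "Ready" condition turns True, within the 15m budget set at StartCluster. Assuming the same kubeconfig, an equivalent one-shot wait (a sketch using kubectl's built-in condition wait rather than minikube's poller):

	# Block until the node reports Ready, mirroring the 15m node wait
	kubectl --context auto-821614 wait node/auto-821614 \
	  --for=condition=Ready --timeout=15m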
	
	
	==> CRI-O <==
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.444670486Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447867885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447900542Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447922631Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450879288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450912027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450934304Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453915709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453945354Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453966877Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.456976671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.457006883Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.576529668Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1f80d718-525a-4fb4-83e0-58b7abcb747b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.578174198Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58c6e520-74d7-4504-a1ec-33414f869bf0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.57937187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=048b1a1a-3ee5-4db7-b5c2-873f3162c527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.579499388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.588307615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.588920523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.612547172Z" level=info msg="Created container 768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=048b1a1a-3ee5-4db7-b5c2-873f3162c527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.614652657Z" level=info msg="Starting container: 768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897" id=73a44d48-0b80-414d-8983-c77d21cdcb44 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.617554921Z" level=info msg="Started container" PID=1730 containerID=768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper id=73a44d48-0b80-414d-8983-c77d21cdcb44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e61236222d1e60038ddbcd5b8358d3adf6764607fb33f8516701bd43c5b117f
	Oct 25 10:37:22 no-preload-768303 conmon[1728]: conmon 768de17eae2727dc2b38 <ninfo>: container 1730 exited with status 1
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.88135235Z" level=info msg="Removing container: 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.892849596Z" level=info msg="Error loading conmon cgroup of container 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999: cgroup deleted" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.901601789Z" level=info msg="Removed container 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
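The Created/Started/exited-with-status-1/Removed sequence above is the dashboard-metrics-scraper container crash-looping (attempt 3 in the container status below). From a shell on the node, the same state can be inspected with crictl (a sketch; the container ID prefix comes from the log):

	# List the scraper's containers, including exited attempts, then dump recent logs
	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl logs --tail 50 768de17eae272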
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	768de17eae272       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   8e61236222d1e       dashboard-metrics-scraper-6ffb444bf9-nrs74   kubernetes-dashboard
	62a15f1c7868d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   703b97ffefd79       storage-provisioner                          kube-system
	9732113c248eb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   6b560b20454bd       kubernetes-dashboard-855c9754f9-mk9wc        kubernetes-dashboard
	73f8b7df780f0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   47edeaf4945aa       busybox                                      default
	44fb97e92f81b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   b92c859be13db       coredns-66bc5c9577-xpwdq                     kube-system
	c8f46af3f17bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   57128b7046b95       kindnet-gkbg7                                kube-system
	0492235313c1a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   703b97ffefd79       storage-provisioner                          kube-system
	403792b3f1ed4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   b4065a08ddd60       kube-proxy-m9bnn                             kube-system
	c59a4eacffb62       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   dcc605674f701       kube-apiserver-no-preload-768303             kube-system
	c1fa525274c96       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a110e08986d42       kube-scheduler-no-preload-768303             kube-system
	82f4a3c724831       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   62f33adbf551a       kube-controller-manager-no-preload-768303    kube-system
	29ccba364a872       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2e799b1c5c6d7       etcd-no-preload-768303                       kube-system
	
	
	==> coredns [44fb97e92f81b6f58a2866e13945a4e276c3468dc6734864d6817b7fb99282a5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36790 - 11588 "HINFO IN 3328332253302026847.3606455303663770082. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030873242s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
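The i/o timeouts above are CoreDNS failing to reach the in-cluster API VIP 10.96.0.1:443 while the control plane was restarting; once the apiserver came back, the ready plugin's waits stopped. A hedged sketch for confirming the VIP and the apiserver endpoint behind it:

	# 10.96.0.1 is the ClusterIP of the default "kubernetes" Service;
	# its endpoints should point at the apiserver (192.168.85.2:8443 here)
	kubectl --context no-preload-768303 get svc kubernetes -o wide
	kubectl --context no-preload-768303 get endpoints kubernetes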
	
	
	==> describe nodes <==
	Name:               no-preload-768303
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-768303
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-768303
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-768303
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-768303
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02b80f62-aa20-40d0-81a6-fccd316d79be
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 coredns-66bc5c9577-xpwdq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-no-preload-768303                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-gkbg7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-no-preload-768303              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-no-preload-768303     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-m9bnn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-no-preload-768303              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nrs74    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mk9wc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s (x8 over 2m17s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m3s                   node-controller  Node no-preload-768303 event: Registered Node no-preload-768303 in Controller
	  Normal   NodeReady                106s                   kubelet          Node no-preload-768303 status is now: NodeReady
	  Normal   Starting                 68s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-768303 event: Registered Node no-preload-768303 in Controller
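The event table interleaves three kubelet starts (2m17s, 2m8s, and 68s ago), matching the stop/restart cycles this test group drives. The same events can be pulled directly with a field selector (a sketch):

	# Show only events attached to the node object, oldest first
	kubectl --context no-preload-768303 get events -A \
	  --field-selector involvedObject.kind=Node,involvedObject.name=no-preload-768303 \
	  --sort-by=.lastTimestamp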
	
	
	==> dmesg <==
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[  +9.574283] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9] <==
	{"level":"warn","ts":"2025-10-25T10:36:29.408085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.459423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.527621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.575501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.629466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.668407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.732310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.801196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.852832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.915450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.983796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.024249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.057141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.089536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.137015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.168111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.197782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.237911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.273532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.289497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.337840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.366365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.429592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.485249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.641356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:32 up  2:20,  0 user,  load average: 4.59, 4.12, 3.43
	Linux no-preload-768303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c8f46af3f17bdb7311a5124e4ee22cdc269f9aca8899d31cda046d5330eb7dd0] <==
	I1025 10:36:34.224163       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:36:34.224596       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:36:34.224738       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:36:34.224750       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:36:34.224760       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:36:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:36:34.431088       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:36:34.431174       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:36:34.431373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:36:34.432166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:37:04.431293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:37:04.432532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:37:04.432642       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:37:04.432727       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:37:05.731744       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:37:05.731839       1 metrics.go:72] Registering metrics
	I1025 10:37:05.731914       1 controller.go:711] "Syncing nftables rules"
	I1025 10:37:14.435778       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:37:14.435833       1 main.go:301] handling current node
	I1025 10:37:24.430976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:37:24.431006       1 main.go:301] handling current node
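kindnet wrote its CNI config via the temp-file WRITE/RENAME/CREATE sequence recorded in the CRI-O log above, after which CRI-O adopted it as the default network. To view the resulting config on the node (a sketch; assumes the standard conflist path from the log):

	# The active config CRI-O picked up as the default CNI network
	minikube -p no-preload-768303 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist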
	
	
	==> kube-apiserver [c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f] <==
	I1025 10:36:32.446987       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:36:32.455384       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:36:32.455468       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:36:32.466375       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:36:32.466709       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:36:32.466739       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:36:32.466773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:36:32.468893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:36:32.497543       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:36:32.500176       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:36:32.501314       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:36:32.501338       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:36:32.501347       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:36:32.501355       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:36:32.725075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:36:33.448823       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:36:33.602580       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:36:34.221489       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:36:34.515870       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:36:34.634351       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:36:34.947109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.93.15"}
	I1025 10:36:35.012166       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.152.173"}
	I1025 10:36:36.955384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:36:37.282466       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:36:37.331639       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88] <==
	I1025 10:36:36.956568       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:36:36.956698       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:36:36.962557       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:36:36.966782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:36:36.967111       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:36:36.967781       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-768303"
	I1025 10:36:36.967922       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:36:36.969609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:36.975048       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:36:36.975356       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:36:36.975623       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:36:36.977428       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:36:36.977514       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:36:36.977532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:36:36.977938       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:36:36.980254       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:36:36.980386       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:36:36.984209       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:36:36.989059       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:36:36.994725       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:36:36.998031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:36.999234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:36:37.004322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:37.004488       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:36:37.004523       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [403792b3f1ed46564bd4347a8a8647977de7599f4e850acc81992dbd9bc4e22b] <==
	I1025 10:36:35.472033       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:36:35.807990       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:36:35.909038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:36:35.909147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:36:35.909266       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:36:35.934522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:36:35.934635       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:36:35.940784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:36:35.941173       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:36:35.941370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:35.942592       1 config.go:200] "Starting service config controller"
	I1025 10:36:35.942650       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:36:35.942693       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:36:35.942720       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:36:35.942756       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:36:35.942780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:36:35.943804       1 config.go:309] "Starting node config controller"
	I1025 10:36:35.944715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:36:35.944768       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:36:36.043668       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:36:36.043715       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:36:36.043677       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848] <==
	I1025 10:36:30.710980       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:36:35.670064       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:36:35.670102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:35.689294       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:36:35.689388       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:36:35.689417       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:36:35.689449       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:36:35.700277       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.700425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.700473       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:36:35.700505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:36:35.789710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:36:35.801585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.801735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:36:44 no-preload-768303 kubelet[773]: I1025 10:36:44.760703     773 scope.go:117] "RemoveContainer" containerID="98071963c73700d8860d3870556be774087396575a1141aac0ca689a0a18b6cd"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: I1025 10:36:45.767040     773 scope.go:117] "RemoveContainer" containerID="98071963c73700d8860d3870556be774087396575a1141aac0ca689a0a18b6cd"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: I1025 10:36:45.767375     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: E1025 10:36:45.767741     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:46 no-preload-768303 kubelet[773]: I1025 10:36:46.770795     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:46 no-preload-768303 kubelet[773]: E1025 10:36:46.770941     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:47 no-preload-768303 kubelet[773]: I1025 10:36:47.786952     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:47 no-preload-768303 kubelet[773]: E1025 10:36:47.787515     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:58 no-preload-768303 kubelet[773]: I1025 10:36:58.577245     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:58 no-preload-768303 kubelet[773]: I1025 10:36:58.815022     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: I1025 10:36:59.819468     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: E1025 10:36:59.820116     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: I1025 10:36:59.840970     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mk9wc" podStartSLOduration=10.983475215 podStartE2EDuration="22.840860823s" podCreationTimestamp="2025-10-25 10:36:37 +0000 UTC" firstStartedPulling="2025-10-25 10:36:37.818198793 +0000 UTC m=+13.694419095" lastFinishedPulling="2025-10-25 10:36:49.675584401 +0000 UTC m=+25.551804703" observedRunningTime="2025-10-25 10:36:49.832047094 +0000 UTC m=+25.708267404" watchObservedRunningTime="2025-10-25 10:36:59.840860823 +0000 UTC m=+35.717081150"
	Oct 25 10:37:04 no-preload-768303 kubelet[773]: I1025 10:37:04.834367     773 scope.go:117] "RemoveContainer" containerID="0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538"
	Oct 25 10:37:07 no-preload-768303 kubelet[773]: I1025 10:37:07.766894     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:07 no-preload-768303 kubelet[773]: E1025 10:37:07.767551     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.575753     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.879407     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.879681     773 scope.go:117] "RemoveContainer" containerID="768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: E1025 10:37:22.879836     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:27 no-preload-768303 kubelet[773]: I1025 10:37:27.755022     773 scope.go:117] "RemoveContainer" containerID="768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	Oct 25 10:37:27 no-preload-768303 kubelet[773]: E1025 10:37:27.755814     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:29 no-preload-768303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:37:29 no-preload-768303 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:37:29 no-preload-768303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9732113c248ebd098cdf4f6f6e91edb5873b14fea51851da7264013a9aacb532] <==
	2025/10/25 10:36:49 Using namespace: kubernetes-dashboard
	2025/10/25 10:36:49 Using in-cluster config to connect to apiserver
	2025/10/25 10:36:49 Using secret token for csrf signing
	2025/10/25 10:36:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:36:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:36:49 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:36:49 Generating JWE encryption key
	2025/10/25 10:36:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:36:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:36:50 Initializing JWE encryption key from synchronized object
	2025/10/25 10:36:50 Creating in-cluster Sidecar client
	2025/10/25 10:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:36:50 Serving insecurely on HTTP port: 9090
	2025/10/25 10:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:36:49 Starting overwatch
	
	
	==> storage-provisioner [0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538] <==
	I1025 10:36:34.761969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:37:04.768810       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [62a15f1c7868d04631806759f4487bee1b2c75b4a3a11adc84948d3d78dc6a31] <==
	I1025 10:37:04.977303       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1025 10:37:04.977452       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1025 10:37:04.980483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:08.436040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:12.697137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:16.294992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:19.348954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.371268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.376217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:37:22.376936       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:37:22.376993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"911c92e5-c16f-402a-9e0d-e46ef78d17f2", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f became leader
	I1025 10:37:22.377196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f!
	W1025 10:37:22.386127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.389411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:37:22.477682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f!
	W1025 10:37:24.392642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:24.400112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:26.403970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:26.408304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:28.412018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:28.418400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:30.421691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:30.426952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:32.430167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:32.440371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
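Note: the repeated "dial tcp 10.96.0.1:443: i/o timeout" failures in the log sections above (kube-network-policies, storage-provisioner) all target the in-cluster apiserver VIP; 10.96.0.1 is the first address of the configured ServiceCIDR 10.96.0.0/12, i.e. the default kubernetes Service. A quick way to confirm that VIP with standard kubectl, using the same context as the rest of this post-mortem:

	# print the ClusterIP of the default kubernetes Service
	kubectl --context no-preload-768303 -n default get svc kubernetes -o jsonpath='{.spec.clusterIP}'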
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-768303 -n no-preload-768303: exit status 2 (382.521568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-768303 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
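Note: the kubectl check above (field-selector status.phase!=Running) lists any pod in any namespace whose phase is Pending, Succeeded, Failed, or Unknown; no output printed here means nothing matched. The same query can be reproduced by hand exactly as it appears in the log:

	# list every pod, in every namespace, that is not in the Running phase
	kubectl --context no-preload-768303 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'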
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
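Note: the snapshot above records the proxy environment as the test harness saw it; all three variables were unset. A rough shell equivalent for anyone re-running the post-mortem by hand (plain POSIX shell, nothing minikube-specific):

	# print the proxy-related variables, substituting <empty> when unset
	for v in HTTP_PROXY HTTPS_PROXY NO_PROXY; do eval "echo $v=\"\${$v:-<empty>}\""; done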
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-768303
helpers_test.go:243: (dbg) docker inspect no-preload-768303:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	        "Created": "2025-10-25T10:34:41.024753053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 501928,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-25T10:36:15.540152461Z",
	            "FinishedAt": "2025-10-25T10:36:14.437999407Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/hosts",
	        "LogPath": "/var/lib/docker/containers/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1/9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1-json.log",
	        "Name": "/no-preload-768303",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-768303:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-768303",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b0b6c2f298a351a13b7790ae51ec5d870ae3d00b4490d4a3b094eb081d986c1",
	                "LowerDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499-init/diff:/var/lib/docker/overlay2/56244b4f60319ba406f5ebe406da3f45bd5764c167111669ae2d84794cf94518/diff",
	                "MergedDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/merged",
	                "UpperDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/diff",
	                "WorkDir": "/var/lib/docker/overlay2/731d2c060601c6df8e49fb046de626ce25a10794f565bba2c8a91fcca1bcf499/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-768303",
	                "Source": "/var/lib/docker/volumes/no-preload-768303/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-768303",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-768303",
	                "name.minikube.sigs.k8s.io": "no-preload-768303",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "291066f16b765922cd121e97b3777d475683376e68ee721e072d2bd070aeac32",
	            "SandboxKey": "/var/run/docker/netns/291066f16b76",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-768303": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a4:d1:a7:15:bd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "859ef893d5c6367e34b4500fcc3b03774bcaafce1067944be65176cec7fd385b",
	                    "EndpointID": "64a0c79f08e2210d0a6daf2aadbd7e90a53b714ad252e6c947e4aea5e37de05f",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-768303",
	                        "9b0b6c2f298a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
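Note: when only a few fields from a dump like the one above matter, docker inspect accepts a Go template via -f; for example, the pause-related state of this node container (standard docker CLI, applicable to any profile in this report):

	# show just the runtime status and paused flag for the node container
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-768303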
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303: exit status 2 (378.431771ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
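Note: minikube status renders its output through a Go template, so the two single-field checks in this post-mortem ({{.Host}} here and {{.APIServer}} earlier) could plausibly be combined into one call; a sketch, assuming the Kubelet and Kubeconfig fields listed by minikube status --help:

	# one-line component summary; a non-zero exit still signals a degraded cluster
	out/minikube-linux-arm64 status -p no-preload-768303 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}/{{.Kubeconfig}}'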
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-768303 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-768303 logs -n 25: (1.333741558s)
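Note: the -n 25 flag caps minikube logs at (roughly) the last 25 lines per log source, which is why each ==> ... <== section below is short; capturing the same truncated post-mortem to a file is a one-liner with the same binary and profile:

	# save the truncated post-mortem logs for offline inspection
	out/minikube-linux-arm64 -p no-preload-768303 logs -n 25 > no-preload-768303-postmortem.log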
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p default-k8s-diff-port-204074 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p default-k8s-diff-port-204074                                                                                                                                                                                                               │ default-k8s-diff-port-204074 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ delete  │ -p disable-driver-mounts-533631                                                                                                                                                                                                               │ disable-driver-mounts-533631 │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:34 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:34 UTC │ 25 Oct 25 10:35 UTC │
	│ image   │ embed-certs-419185 image list --format=json                                                                                                                                                                                                   │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ pause   │ -p embed-certs-419185 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ delete  │ -p embed-certs-419185                                                                                                                                                                                                                         │ embed-certs-419185           │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:35 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p no-preload-768303 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:35 UTC │                     │
	│ stop    │ -p no-preload-768303 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable metrics-server -p newest-cni-491554 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ stop    │ -p newest-cni-491554 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p newest-cni-491554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ addons  │ enable dashboard -p no-preload-768303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:37 UTC │
	│ image   │ newest-cni-491554 image list --format=json                                                                                                                                                                                                    │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ pause   │ -p newest-cni-491554 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ delete  │ -p newest-cni-491554                                                                                                                                                                                                                          │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ delete  │ -p newest-cni-491554                                                                                                                                                                                                                          │ newest-cni-491554            │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │ 25 Oct 25 10:36 UTC │
	│ start   │ -p auto-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-821614                  │ jenkins │ v1.37.0 │ 25 Oct 25 10:36 UTC │                     │
	│ image   │ no-preload-768303 image list --format=json                                                                                                                                                                                                    │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:37 UTC │ 25 Oct 25 10:37 UTC │
	│ pause   │ -p no-preload-768303 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-768303            │ jenkins │ v1.37.0 │ 25 Oct 25 10:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 10:36:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 10:36:36.135575  505342 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:36:36.135733  505342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:36.135759  505342 out.go:374] Setting ErrFile to fd 2...
	I1025 10:36:36.135767  505342 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:36:36.136138  505342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:36:36.136606  505342 out.go:368] Setting JSON to false
	I1025 10:36:36.137603  505342 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8346,"bootTime":1761380250,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:36:36.137671  505342 start.go:141] virtualization:  
	I1025 10:36:36.142119  505342 out.go:179] * [auto-821614] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:36:36.145590  505342 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:36:36.145653  505342 notify.go:220] Checking for updates...
	I1025 10:36:36.154497  505342 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:36:36.157628  505342 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:36:36.161334  505342 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:36:36.164389  505342 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:36:36.167429  505342 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:36:36.170865  505342 config.go:182] Loaded profile config "no-preload-768303": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:36.171045  505342 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:36:36.200561  505342 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:36:36.200685  505342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:36.277513  505342 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:36.266304223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:36.277620  505342 docker.go:318] overlay module found
	I1025 10:36:36.280929  505342 out.go:179] * Using the docker driver based on user configuration
	I1025 10:36:36.283729  505342 start.go:305] selected driver: docker
	I1025 10:36:36.283750  505342 start.go:925] validating driver "docker" against <nil>
	I1025 10:36:36.283765  505342 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:36:36.284472  505342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:36:36.344722  505342 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-25 10:36:36.335982345 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:36:36.344879  505342 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 10:36:36.345112  505342 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:36.348019  505342 out.go:179] * Using Docker driver with root privileges
	I1025 10:36:36.350865  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:36:36.350935  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:36.350950  505342 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 10:36:36.351036  505342 start.go:349] cluster config:
	{Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:36.354165  505342 out.go:179] * Starting "auto-821614" primary control-plane node in "auto-821614" cluster
	I1025 10:36:36.356964  505342 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 10:36:36.359885  505342 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1025 10:36:36.362708  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:36.362757  505342 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1025 10:36:36.362769  505342 cache.go:58] Caching tarball of preloaded images
	I1025 10:36:36.362795  505342 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 10:36:36.362873  505342 preload.go:233] Found /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1025 10:36:36.362884  505342 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1025 10:36:36.362993  505342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json ...
	I1025 10:36:36.363018  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json: {Name:mk288058acd38774af24281c2331edd55139cb6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:36.384376  505342 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1025 10:36:36.384400  505342 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1025 10:36:36.384419  505342 cache.go:232] Successfully downloaded all kic artifacts
	I1025 10:36:36.384447  505342 start.go:360] acquireMachinesLock for auto-821614: {Name:mkde8fde4ed6117cd610f36937ca0e2ebed9ded6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1025 10:36:36.384595  505342 start.go:364] duration metric: took 130.455µs to acquireMachinesLock for "auto-821614"
	I1025 10:36:36.384626  505342 start.go:93] Provisioning new machine with config: &{Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:36:36.384702  505342 start.go:125] createHost starting for "" (driver="docker")
	I1025 10:36:35.151969  501769 system_pods.go:86] 8 kube-system pods found
	I1025 10:36:35.152003  501769 system_pods.go:89] "coredns-66bc5c9577-xpwdq" [0aaecbc4-29e5-45e1-ad80-b2465476ab96] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1025 10:36:35.152014  501769 system_pods.go:89] "etcd-no-preload-768303" [dc4f2360-5fc6-4c24-838f-e552e5061d50] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1025 10:36:35.152020  501769 system_pods.go:89] "kindnet-gkbg7" [2844e492-0201-4963-9c6c-74f19df0adea] Running
	I1025 10:36:35.152027  501769 system_pods.go:89] "kube-apiserver-no-preload-768303" [1889e503-da89-437e-b901-e173c80ee724] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1025 10:36:35.152033  501769 system_pods.go:89] "kube-controller-manager-no-preload-768303" [cc11bec4-5fcb-419f-a007-3023c54e5bd0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1025 10:36:35.152038  501769 system_pods.go:89] "kube-proxy-m9bnn" [d1ef05c2-0d0d-43f8-9bb8-f77839881a24] Running
	I1025 10:36:35.152049  501769 system_pods.go:89] "kube-scheduler-no-preload-768303" [bb7736c6-4301-4202-b122-d1fb345fb94a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1025 10:36:35.152054  501769 system_pods.go:89] "storage-provisioner" [89da7f26-c2be-43b2-817c-6c2621a97a30] Running
	I1025 10:36:35.152061  501769 system_pods.go:126] duration metric: took 93.937402ms to wait for k8s-apps to be running ...
	I1025 10:36:35.152069  501769 system_svc.go:44] waiting for kubelet service to be running ....
	I1025 10:36:35.152130  501769 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:36:35.233298  501769 system_svc.go:56] duration metric: took 81.217653ms WaitForService to wait for kubelet
	I1025 10:36:35.233323  501769 kubeadm.go:586] duration metric: took 9.535378047s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1025 10:36:35.233343  501769 node_conditions.go:102] verifying NodePressure condition ...
	I1025 10:36:35.246621  501769 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1025 10:36:35.246651  501769 node_conditions.go:123] node cpu capacity is 2
	I1025 10:36:35.246663  501769 node_conditions.go:105] duration metric: took 13.314622ms to run NodePressure ...
	I1025 10:36:35.246675  501769 start.go:241] waiting for startup goroutines ...
	I1025 10:36:35.246683  501769 start.go:246] waiting for cluster config update ...
	I1025 10:36:35.246693  501769 start.go:255] writing updated cluster config ...
	I1025 10:36:35.246949  501769 ssh_runner.go:195] Run: rm -f paused
	I1025 10:36:35.251271  501769 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:36:35.259782  501769 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	W1025 10:36:37.266355  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:39.266742  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:36.388128  505342 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1025 10:36:36.388358  505342 start.go:159] libmachine.API.Create for "auto-821614" (driver="docker")
	I1025 10:36:36.388407  505342 client.go:168] LocalClient.Create starting
	I1025 10:36:36.388482  505342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem
	I1025 10:36:36.388520  505342 main.go:141] libmachine: Decoding PEM data...
	I1025 10:36:36.388555  505342 main.go:141] libmachine: Parsing certificate...
	I1025 10:36:36.388616  505342 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem
	I1025 10:36:36.388650  505342 main.go:141] libmachine: Decoding PEM data...
	I1025 10:36:36.388665  505342 main.go:141] libmachine: Parsing certificate...
	I1025 10:36:36.389061  505342 cli_runner.go:164] Run: docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1025 10:36:36.409944  505342 cli_runner.go:211] docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1025 10:36:36.410026  505342 network_create.go:284] running [docker network inspect auto-821614] to gather additional debugging logs...
	I1025 10:36:36.410053  505342 cli_runner.go:164] Run: docker network inspect auto-821614
	W1025 10:36:36.430843  505342 cli_runner.go:211] docker network inspect auto-821614 returned with exit code 1
	I1025 10:36:36.430872  505342 network_create.go:287] error running [docker network inspect auto-821614]: docker network inspect auto-821614: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-821614 not found
	I1025 10:36:36.430887  505342 network_create.go:289] output of [docker network inspect auto-821614]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-821614 not found
	
	** /stderr **
	I1025 10:36:36.430977  505342 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:36.451344  505342 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
	I1025 10:36:36.451607  505342 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fee160f0176f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:62:3b:42:f8:f5:b2} reservation:<nil>}
	I1025 10:36:36.452002  505342 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5368f13f34ad IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2e:30:83:3d:80:30} reservation:<nil>}
	I1025 10:36:36.452474  505342 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019f2aa0}
	I1025 10:36:36.452503  505342 network_create.go:124] attempt to create docker network auto-821614 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1025 10:36:36.452580  505342 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-821614 auto-821614
	I1025 10:36:36.517464  505342 network_create.go:108] docker network auto-821614 192.168.76.0/24 created
	I1025 10:36:36.517498  505342 kic.go:121] calculated static IP "192.168.76.2" for the "auto-821614" container
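	[Note] The "skipping subnet" lines above, followed by "using free private subnet" and "calculated static IP", trace how the docker driver picks its network: candidate 192.168.x.0/24 subnets are probed in order (49, 58, 67, 76, stepping by 9), the first one not already bound to a bridge wins, its .1 address becomes the gateway, and .2 becomes the node container's static IP. A minimal Go sketch of that walk; the taken set is hard-coded here for illustration, whereas minikube derives it from docker network inspect:

	package main

	import (
		"fmt"
		"net/netip"
	)

	// taken mirrors the three bridges the log reports as already in use.
	var taken = map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}

	func main() {
		// Probe 192.168.49.0/24, 192.168.58.0/24, ... until a subnet is free,
		// then derive the gateway (.1) and the node IP (.2) from its base.
		for third := 49; third < 256; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[cidr] {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			base := netip.MustParsePrefix(cidr).Addr() // 192.168.76.0
			gateway := base.Next()                     // 192.168.76.1
			nodeIP := gateway.Next()                   // 192.168.76.2
			fmt.Println("using free private subnet", cidr, "gateway", gateway, "node", nodeIP)
			return
		}
	}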
	I1025 10:36:36.517571  505342 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1025 10:36:36.536737  505342 cli_runner.go:164] Run: docker volume create auto-821614 --label name.minikube.sigs.k8s.io=auto-821614 --label created_by.minikube.sigs.k8s.io=true
	I1025 10:36:36.553993  505342 oci.go:103] Successfully created a docker volume auto-821614
	I1025 10:36:36.554096  505342 cli_runner.go:164] Run: docker run --rm --name auto-821614-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-821614 --entrypoint /usr/bin/test -v auto-821614:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1025 10:36:37.192202  505342 oci.go:107] Successfully prepared a docker volume auto-821614
	I1025 10:36:37.192256  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:37.192276  505342 kic.go:194] Starting extracting preloaded images to volume ...
	I1025 10:36:37.192344  505342 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-821614:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1025 10:36:41.767939  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:43.796269  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:42.692814  505342 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-821614:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.500430282s)
	I1025 10:36:42.692841  505342 kic.go:203] duration metric: took 5.500561797s to extract preloaded images to volume ...
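	[Note] The two docker run --rm invocations above show a volume-priming pattern: a throwaway "preload-sidecar" container first verifies the auto-821614 volume is mountable, then a second one untars the lz4 preload into it, so the real node container later starts with /var already populated. A rough Go equivalent of the extraction step, shelling out to the docker CLI; the arguments in main are placeholders standing in for the cached paths the log uses:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// populateVolume untars a preload archive into a named docker volume via a
	// throwaway container, mirroring the "docker run --rm --entrypoint
	// /usr/bin/tar" invocation in the log above.
	func populateVolume(tarball, volume, image string) error {
		out, err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
		).CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract failed: %w: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder arguments; the log uses the cached arm64 cri-o preload
		// and the kicbase-builds image.
		if err := populateVolume("/path/to/preload.tar.lz4", "auto-821614",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773"); err != nil {
			panic(err)
		}
	}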
	W1025 10:36:42.692974  505342 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1025 10:36:42.693076  505342 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1025 10:36:42.773055  505342 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-821614 --name auto-821614 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-821614 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-821614 --network auto-821614 --ip 192.168.76.2 --volume auto-821614:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1025 10:36:43.267634  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Running}}
	I1025 10:36:43.289144  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:43.317858  505342 cli_runner.go:164] Run: docker exec auto-821614 stat /var/lib/dpkg/alternatives/iptables
	I1025 10:36:43.384132  505342 oci.go:144] the created container "auto-821614" has a running status.
	I1025 10:36:43.384166  505342 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa...
	I1025 10:36:44.271194  505342 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1025 10:36:44.291567  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:44.313919  505342 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1025 10:36:44.313942  505342 kic_runner.go:114] Args: [docker exec --privileged auto-821614 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1025 10:36:44.358750  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:36:44.376634  505342 machine.go:93] provisionDockerMachine start ...
	I1025 10:36:44.376734  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:44.393789  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:44.394133  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:44.394151  505342 main.go:141] libmachine: About to run SSH command:
	hostname
	I1025 10:36:44.394824  505342 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1025 10:36:46.265071  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:48.267669  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:47.551177  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-821614
	
	I1025 10:36:47.551201  505342 ubuntu.go:182] provisioning hostname "auto-821614"
	I1025 10:36:47.551262  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:47.570622  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:47.570923  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:47.570941  505342 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-821614 && echo "auto-821614" | sudo tee /etc/hostname
	I1025 10:36:47.748090  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-821614
	
	I1025 10:36:47.748168  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:47.770523  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:47.770840  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:47.770859  505342 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-821614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-821614/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-821614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1025 10:36:47.936885  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1025 10:36:47.936972  505342 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21794-292167/.minikube CaCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21794-292167/.minikube}
	I1025 10:36:47.937032  505342 ubuntu.go:190] setting up certificates
	I1025 10:36:47.937063  505342 provision.go:84] configureAuth start
	I1025 10:36:47.937159  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:47.962276  505342 provision.go:143] copyHostCerts
	I1025 10:36:47.962334  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem, removing ...
	I1025 10:36:47.962343  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem
	I1025 10:36:47.962409  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/ca.pem (1082 bytes)
	I1025 10:36:47.962496  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem, removing ...
	I1025 10:36:47.962501  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem
	I1025 10:36:47.962528  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/cert.pem (1123 bytes)
	I1025 10:36:47.962587  505342 exec_runner.go:144] found /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem, removing ...
	I1025 10:36:47.962591  505342 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem
	I1025 10:36:47.962615  505342 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21794-292167/.minikube/key.pem (1675 bytes)
	I1025 10:36:47.962663  505342 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem org=jenkins.auto-821614 san=[127.0.0.1 192.168.76.2 auto-821614 localhost minikube]
	I1025 10:36:48.048165  505342 provision.go:177] copyRemoteCerts
	I1025 10:36:48.048276  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1025 10:36:48.048350  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.067398  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:48.188532  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1025 10:36:48.222538  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1025 10:36:48.248790  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1025 10:36:48.281212  505342 provision.go:87] duration metric: took 344.114244ms to configureAuth
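	[Note] configureAuth above generates a server certificate whose SANs (san=[127.0.0.1 192.168.76.2 auto-821614 localhost minikube]) cover every name the API endpoint may be reached by. A compact Go sketch of issuing such a cert with those SANs; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair read earlier:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.auto-821614"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			// The SAN list from the provision.go:117 line: IPs and host names
			// the server must answer for.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"auto-821614", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}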
	I1025 10:36:48.281289  505342 ubuntu.go:206] setting minikube options for container-runtime
	I1025 10:36:48.281504  505342 config.go:182] Loaded profile config "auto-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:36:48.281651  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.302767  505342 main.go:141] libmachine: Using SSH client type: native
	I1025 10:36:48.303066  505342 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33472 <nil> <nil>}
	I1025 10:36:48.303088  505342 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1025 10:36:48.679395  505342 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1025 10:36:48.679464  505342 machine.go:96] duration metric: took 4.302806151s to provisionDockerMachine
	I1025 10:36:48.679545  505342 client.go:171] duration metric: took 12.291125842s to LocalClient.Create
	I1025 10:36:48.679581  505342 start.go:167] duration metric: took 12.291223731s to libmachine.API.Create "auto-821614"
	I1025 10:36:48.679606  505342 start.go:293] postStartSetup for "auto-821614" (driver="docker")
	I1025 10:36:48.679647  505342 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1025 10:36:48.679752  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1025 10:36:48.679823  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.712328  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:48.832032  505342 ssh_runner.go:195] Run: cat /etc/os-release
	I1025 10:36:48.837415  505342 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1025 10:36:48.837467  505342 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1025 10:36:48.837483  505342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/addons for local assets ...
	I1025 10:36:48.837541  505342 filesync.go:126] Scanning /home/jenkins/minikube-integration/21794-292167/.minikube/files for local assets ...
	I1025 10:36:48.837622  505342 filesync.go:149] local asset: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem -> 2940172.pem in /etc/ssl/certs
	I1025 10:36:48.837732  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1025 10:36:48.846245  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:48.867220  505342 start.go:296] duration metric: took 187.569071ms for postStartSetup
	I1025 10:36:48.867568  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:48.888365  505342 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/config.json ...
	I1025 10:36:48.888643  505342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:36:48.888693  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:48.912295  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.022582  505342 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1025 10:36:49.027709  505342 start.go:128] duration metric: took 12.642992569s to createHost
	I1025 10:36:49.027736  505342 start.go:83] releasing machines lock for "auto-821614", held for 12.643126406s
	I1025 10:36:49.027808  505342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-821614
	I1025 10:36:49.054506  505342 ssh_runner.go:195] Run: cat /version.json
	I1025 10:36:49.054563  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:49.054803  505342 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1025 10:36:49.054855  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:36:49.089653  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.098380  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:36:49.215804  505342 ssh_runner.go:195] Run: systemctl --version
	I1025 10:36:49.343721  505342 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1025 10:36:49.407664  505342 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1025 10:36:49.413454  505342 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1025 10:36:49.413532  505342 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1025 10:36:49.464242  505342 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1025 10:36:49.464283  505342 start.go:495] detecting cgroup driver to use...
	I1025 10:36:49.464322  505342 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1025 10:36:49.464387  505342 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1025 10:36:49.502561  505342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1025 10:36:49.517335  505342 docker.go:218] disabling cri-docker service (if available) ...
	I1025 10:36:49.517420  505342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1025 10:36:49.537985  505342 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1025 10:36:49.557092  505342 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1025 10:36:49.706713  505342 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1025 10:36:49.905689  505342 docker.go:234] disabling docker service ...
	I1025 10:36:49.905827  505342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1025 10:36:49.935868  505342 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1025 10:36:49.958225  505342 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1025 10:36:50.105146  505342 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1025 10:36:50.300074  505342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1025 10:36:50.320435  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1025 10:36:50.341247  505342 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1025 10:36:50.341387  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.354300  505342 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1025 10:36:50.354462  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.366673  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.378583  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.390598  505342 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1025 10:36:50.402837  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.415264  505342 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.436096  505342 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1025 10:36:50.449687  505342 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1025 10:36:50.461394  505342 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1025 10:36:50.472485  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:50.608427  505342 ssh_runner.go:195] Run: sudo systemctl restart crio
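	[Note] The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.10.1 and the cgroup manager to cgroupfs before crio is restarted. The same two rewrites expressed as a Go sketch (it would need root, and crio still has to be restarted afterwards, exactly as the log does):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Equivalent of the two "sed -i 's|^.*... = .*$|...|'" commands above.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, data, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("updated", conf)
	}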
	I1025 10:36:50.875976  505342 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1025 10:36:50.876060  505342 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1025 10:36:50.880441  505342 start.go:563] Will wait 60s for crictl version
	I1025 10:36:50.880552  505342 ssh_runner.go:195] Run: which crictl
	I1025 10:36:50.884431  505342 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1025 10:36:50.913645  505342 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1025 10:36:50.913754  505342 ssh_runner.go:195] Run: crio --version
	I1025 10:36:50.948405  505342 ssh_runner.go:195] Run: crio --version
	I1025 10:36:50.981757  505342 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1025 10:36:50.984813  505342 cli_runner.go:164] Run: docker network inspect auto-821614 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1025 10:36:51.006635  505342 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1025 10:36:51.011088  505342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1025 10:36:51.023089  505342 kubeadm.go:883] updating cluster {Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1025 10:36:51.023236  505342 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1025 10:36:51.023291  505342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:51.057107  505342 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:51.057131  505342 crio.go:433] Images already preloaded, skipping extraction
	I1025 10:36:51.057201  505342 ssh_runner.go:195] Run: sudo crictl images --output json
	I1025 10:36:51.088626  505342 crio.go:514] all images are preloaded for cri-o runtime.
	I1025 10:36:51.088651  505342 cache_images.go:85] Images are preloaded, skipping loading
	I1025 10:36:51.088659  505342 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1025 10:36:51.088769  505342 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-821614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1025 10:36:51.088858  505342 ssh_runner.go:195] Run: crio config
	I1025 10:36:51.149441  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:36:51.149462  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:36:51.149476  505342 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1025 10:36:51.149501  505342 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-821614 NodeName:auto-821614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1025 10:36:51.149633  505342 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-821614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1025 10:36:51.149705  505342 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1025 10:36:51.159340  505342 binaries.go:44] Found k8s binaries, skipping transfer
	I1025 10:36:51.159423  505342 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1025 10:36:51.168095  505342 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1025 10:36:51.181468  505342 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1025 10:36:51.194626  505342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
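	[Note] The kubeadm.yaml just copied to /var/tmp/minikube bundles the three documents printed above; the KubeletConfiguration's cgroupDriver: cgroupfs must agree with the cgroup_manager forced into 02-crio.conf earlier, or the kubelet will not start against the runtime. A small Go sketch that decodes the kubelet document and surfaces those fields; it assumes the gopkg.in/yaml.v3 module, though any YAML decoder would do:

	package main

	import (
		"fmt"

		"gopkg.in/yaml.v3" // assumed dependency
	)

	// Just the fields we want to cross-check from the KubeletConfiguration
	// document dumped in the log above.
	type kubeletConfig struct {
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
		FailSwapOn               bool   `yaml:"failSwapOn"`
	}

	func main() {
		doc := []byte("apiVersion: kubelet.config.k8s.io/v1beta1\n" +
			"kind: KubeletConfiguration\n" +
			"cgroupDriver: cgroupfs\n" +
			"containerRuntimeEndpoint: unix:///var/run/crio/crio.sock\n" +
			"failSwapOn: false\n")
		var kc kubeletConfig
		if err := yaml.Unmarshal(doc, &kc); err != nil {
			panic(err)
		}
		// cgroupDriver here must match crio's cgroup_manager set earlier.
		fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
			kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
	}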
	I1025 10:36:51.207674  505342 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1025 10:36:51.211146  505342 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
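	[Note] The one-liner above (and the host.minikube.internal variant earlier) is an idempotent hosts-file upsert: drop any line ending in a tab plus the name, append a fresh ip<TAB>name entry, and copy the result back over /etc/hosts. The same pattern as an illustrative Go helper:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHost mirrors the shell pattern { grep -v ...; echo ...; } > tmp; cp:
	// remove any existing line ending in "\t<name>", then append a fresh entry.
	func upsertHost(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			panic(err)
		}
		fmt.Println("hosts entry updated")
	}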
	I1025 10:36:51.221111  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:36:51.337465  505342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:36:51.353266  505342 certs.go:69] Setting up /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614 for IP: 192.168.76.2
	I1025 10:36:51.353338  505342 certs.go:195] generating shared ca certs ...
	I1025 10:36:51.353377  505342 certs.go:227] acquiring lock for ca certs: {Name:mk9438893ca511bbb3aa6154ee7b6f94b409696d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:51.353572  505342 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key
	I1025 10:36:51.353673  505342 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key
	I1025 10:36:51.353710  505342 certs.go:257] generating profile certs ...
	I1025 10:36:51.353797  505342 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key
	I1025 10:36:51.353843  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt with IP's: []
	I1025 10:36:52.249587  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt ...
	I1025 10:36:52.249620  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt: {Name:mk68c5657cec1ee108d29f35b574472597d3841a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.249812  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key ...
	I1025 10:36:52.249826  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.key: {Name:mk4db70f76cc1d2d3527c36e08dc1099e2263815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.249925  505342 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253
	I1025 10:36:52.249943  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1025 10:36:52.393728  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 ...
	I1025 10:36:52.393761  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253: {Name:mkc6133a807a3da2ea2fc5216272f79a38a7963d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.393947  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253 ...
	I1025 10:36:52.393961  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253: {Name:mka41dc493d8abb5f80cfdbb2013cbeba4c1c809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.394050  505342 certs.go:382] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt.72bfe253 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt
	I1025 10:36:52.394146  505342 certs.go:386] copying /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key.72bfe253 -> /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key
	I1025 10:36:52.394207  505342 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key
	I1025 10:36:52.394225  505342 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt with IP's: []
	I1025 10:36:52.507700  505342 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt ...
	I1025 10:36:52.507726  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt: {Name:mk3924dd4860a4fd48bd155b7b678c9501c3cb10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.507899  505342 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key ...
	I1025 10:36:52.507912  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key: {Name:mkccacbfe9de1053b9a7312651ffe4e577d2fb7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:36:52.508108  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem (1338 bytes)
	W1025 10:36:52.508149  505342 certs.go:480] ignoring /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017_empty.pem, impossibly tiny 0 bytes
	I1025 10:36:52.508165  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca-key.pem (1675 bytes)
	I1025 10:36:52.508191  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/ca.pem (1082 bytes)
	I1025 10:36:52.508219  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/cert.pem (1123 bytes)
	I1025 10:36:52.508245  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/certs/key.pem (1675 bytes)
	I1025 10:36:52.508288  505342 certs.go:484] found cert: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem (1708 bytes)
	I1025 10:36:52.508991  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1025 10:36:52.528658  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1025 10:36:52.547758  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1025 10:36:52.566107  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1025 10:36:52.584094  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1025 10:36:52.602602  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1025 10:36:52.620722  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1025 10:36:52.638155  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1025 10:36:52.655903  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/certs/294017.pem --> /usr/share/ca-certificates/294017.pem (1338 bytes)
	I1025 10:36:52.673927  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/ssl/certs/2940172.pem --> /usr/share/ca-certificates/2940172.pem (1708 bytes)
	I1025 10:36:52.691039  505342 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1025 10:36:52.708494  505342 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1025 10:36:52.721521  505342 ssh_runner.go:195] Run: openssl version
	I1025 10:36:52.729636  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2940172.pem && ln -fs /usr/share/ca-certificates/2940172.pem /etc/ssl/certs/2940172.pem"
	I1025 10:36:52.738537  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.742275  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 25 09:39 /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.742338  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2940172.pem
	I1025 10:36:52.790421  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2940172.pem /etc/ssl/certs/3ec20f2e.0"
	I1025 10:36:52.801286  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1025 10:36:52.810465  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.814691  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 25 09:33 /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.814806  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1025 10:36:52.859400  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1025 10:36:52.867726  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/294017.pem && ln -fs /usr/share/ca-certificates/294017.pem /etc/ssl/certs/294017.pem"
	I1025 10:36:52.875644  505342 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.879566  505342 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 25 09:39 /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.879628  505342 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/294017.pem
	I1025 10:36:52.921166  505342 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/294017.pem /etc/ssl/certs/51391683.0"
	I1025 10:36:52.929216  505342 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1025 10:36:52.932614  505342 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1025 10:36:52.932690  505342 kubeadm.go:400] StartCluster: {Name:auto-821614 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-821614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 10:36:52.932780  505342 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1025 10:36:52.932844  505342 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1025 10:36:52.964764  505342 cri.go:89] found id: ""
	I1025 10:36:52.964895  505342 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1025 10:36:52.972681  505342 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1025 10:36:52.980393  505342 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1025 10:36:52.980486  505342 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1025 10:36:52.988044  505342 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1025 10:36:52.988066  505342 kubeadm.go:157] found existing configuration files:
	
	I1025 10:36:52.988117  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1025 10:36:52.995780  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1025 10:36:52.995841  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1025 10:36:53.004366  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1025 10:36:53.013084  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1025 10:36:53.013211  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1025 10:36:53.021950  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1025 10:36:53.030960  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1025 10:36:53.031072  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1025 10:36:53.038540  505342 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1025 10:36:53.046442  505342 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1025 10:36:53.046509  505342 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
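
	The four grep-then-rm exchanges above implement one rule: any leftover kubeconfig that does not reference the expected control-plane endpoint is treated as stale and deleted before kubeadm runs. Condensed into a sketch (endpoint and paths as in the log):

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
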
	I1025 10:36:53.053656  505342 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1025 10:36:53.094503  505342 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1025 10:36:53.094792  505342 kubeadm.go:318] [preflight] Running pre-flight checks
	I1025 10:36:53.121798  505342 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1025 10:36:53.121877  505342 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1025 10:36:53.121918  505342 kubeadm.go:318] OS: Linux
	I1025 10:36:53.121970  505342 kubeadm.go:318] CGROUPS_CPU: enabled
	I1025 10:36:53.122025  505342 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1025 10:36:53.122077  505342 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1025 10:36:53.122136  505342 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1025 10:36:53.122191  505342 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1025 10:36:53.122245  505342 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1025 10:36:53.122297  505342 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1025 10:36:53.122350  505342 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1025 10:36:53.122402  505342 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1025 10:36:53.196120  505342 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1025 10:36:53.196745  505342 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1025 10:36:53.196876  505342 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1025 10:36:53.203689  505342 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1025 10:36:50.273370  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:52.765687  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:54.767219  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:53.208694  505342 out.go:252]   - Generating certificates and keys ...
	I1025 10:36:53.208856  505342 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1025 10:36:53.208962  505342 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1025 10:36:53.435968  505342 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1025 10:36:54.090061  505342 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1025 10:36:54.662434  505342 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1025 10:36:55.022645  505342 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1025 10:36:55.788707  505342 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1025 10:36:55.789047  505342 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-821614 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1025 10:36:56.009144  505342 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1025 10:36:56.009545  505342 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-821614 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1025 10:36:57.267678  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:36:59.776871  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:36:56.206455  505342 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1025 10:36:57.785951  505342 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1025 10:36:58.565167  505342 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1025 10:36:58.565516  505342 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1025 10:36:58.840062  505342 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1025 10:36:59.135397  505342 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1025 10:36:59.474095  505342 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1025 10:37:00.505163  505342 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1025 10:37:01.179011  505342 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1025 10:37:01.179616  505342 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1025 10:37:01.183877  505342 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1025 10:37:02.266487  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:04.768069  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:01.187321  505342 out.go:252]   - Booting up control plane ...
	I1025 10:37:01.187445  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1025 10:37:01.187564  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1025 10:37:01.189775  505342 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1025 10:37:01.207662  505342 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1025 10:37:01.208377  505342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1025 10:37:01.217114  505342 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1025 10:37:01.217449  505342 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1025 10:37:01.217655  505342 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1025 10:37:01.366068  505342 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1025 10:37:01.366196  505342 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1025 10:37:03.367955  505342 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001089851s
	I1025 10:37:03.373219  505342 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1025 10:37:03.373335  505342 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1025 10:37:03.373633  505342 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1025 10:37:03.373741  505342 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1025 10:37:07.266525  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:09.766690  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:06.292001  505342 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.918117751s
	I1025 10:37:08.989188  505342 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.615951352s
	I1025 10:37:10.874851  505342 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.50129031s
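
	The control-plane-check phase polls each component's standard health port, visible in the "Checking ..." lines above. The same probes can be issued by hand from the node; -k is needed because the serving certificates are not in the host trust store (a sketch, endpoints copied from this run):

	    curl -s  http://127.0.0.1:10248/healthz   # kubelet
	    curl -sk https://127.0.0.1:10257/healthz  # kube-controller-manager
	    curl -sk https://127.0.0.1:10259/livez    # kube-scheduler
	    curl -sk https://192.168.76.2:8443/livez  # kube-apiserver
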
	I1025 10:37:10.896813  505342 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1025 10:37:10.916381  505342 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1025 10:37:10.931443  505342 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1025 10:37:10.931661  505342 kubeadm.go:318] [mark-control-plane] Marking the node auto-821614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1025 10:37:10.951711  505342 kubeadm.go:318] [bootstrap-token] Using token: t2dl2l.8q4tpfrh2bqn7jsk
	I1025 10:37:10.954756  505342 out.go:252]   - Configuring RBAC rules ...
	I1025 10:37:10.954915  505342 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1025 10:37:10.961161  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1025 10:37:10.975989  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1025 10:37:10.982323  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1025 10:37:10.987204  505342 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1025 10:37:10.991658  505342 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1025 10:37:11.285124  505342 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1025 10:37:11.720539  505342 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1025 10:37:12.286334  505342 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1025 10:37:12.287689  505342 kubeadm.go:318] 
	I1025 10:37:12.287764  505342 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1025 10:37:12.287770  505342 kubeadm.go:318] 
	I1025 10:37:12.287851  505342 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1025 10:37:12.287855  505342 kubeadm.go:318] 
	I1025 10:37:12.287882  505342 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1025 10:37:12.287944  505342 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1025 10:37:12.287996  505342 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1025 10:37:12.288001  505342 kubeadm.go:318] 
	I1025 10:37:12.288058  505342 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1025 10:37:12.288063  505342 kubeadm.go:318] 
	I1025 10:37:12.288115  505342 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1025 10:37:12.288119  505342 kubeadm.go:318] 
	I1025 10:37:12.288174  505342 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1025 10:37:12.288252  505342 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1025 10:37:12.288323  505342 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1025 10:37:12.288328  505342 kubeadm.go:318] 
	I1025 10:37:12.288416  505342 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1025 10:37:12.288502  505342 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1025 10:37:12.288507  505342 kubeadm.go:318] 
	I1025 10:37:12.288595  505342 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token t2dl2l.8q4tpfrh2bqn7jsk \
	I1025 10:37:12.288725  505342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 \
	I1025 10:37:12.288749  505342 kubeadm.go:318] 	--control-plane 
	I1025 10:37:12.288755  505342 kubeadm.go:318] 
	I1025 10:37:12.288844  505342 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1025 10:37:12.288849  505342 kubeadm.go:318] 
	I1025 10:37:12.288936  505342 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token t2dl2l.8q4tpfrh2bqn7jsk \
	I1025 10:37:12.289042  505342 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3fbde57ac93e639221cdfe118779e4c6254e9915ac4b07aafbe95a9f86bce8d0 
	I1025 10:37:12.293769  505342 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1025 10:37:12.293995  505342 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1025 10:37:12.294101  505342 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
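
	The --discovery-token-ca-cert-hash printed in both join commands above is the SHA-256 of the cluster CA's DER-encoded public key; a joining node uses it to pin the CA without having the certificate pre-shared. It can be recomputed on the control plane with the standard kubeadm recipe (CA path taken from the certificateDir above; assumes an RSA key, kubeadm's default):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
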
	I1025 10:37:12.294123  505342 cni.go:84] Creating CNI manager for ""
	I1025 10:37:12.294130  505342 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 10:37:12.297336  505342 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1025 10:37:12.265351  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	W1025 10:37:14.767095  501769 pod_ready.go:104] pod "coredns-66bc5c9577-xpwdq" is not "Ready", error: <nil>
	I1025 10:37:12.300350  505342 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1025 10:37:12.305127  505342 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1025 10:37:12.305150  505342 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1025 10:37:12.319085  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1025 10:37:13.070341  505342 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1025 10:37:13.070426  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:13.070483  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-821614 minikube.k8s.io/updated_at=2025_10_25T10_37_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53 minikube.k8s.io/name=auto-821614 minikube.k8s.io/primary=true
	I1025 10:37:13.235592  505342 ops.go:34] apiserver oom_adj: -16
	I1025 10:37:13.235601  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:13.735767  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:14.235919  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:14.735673  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:15.235643  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:15.736402  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.236024  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.735706  505342 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1025 10:37:16.885807  505342 kubeadm.go:1113] duration metric: took 3.815439627s to wait for elevateKubeSystemPrivileges
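
	The repeated "kubectl get sa default" runs above, one every 500ms, are a readiness gate: the default ServiceAccount only appears once the controller-manager's service-account controller is working, which is what the elevateKubeSystemPrivileges wait measures. The same loop as a sketch, using the exact command from the log:

	    # poll until the default ServiceAccount exists
	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
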
	I1025 10:37:16.885834  505342 kubeadm.go:402] duration metric: took 23.953147674s to StartCluster
	I1025 10:37:16.885850  505342 settings.go:142] acquiring lock: {Name:mkea67a33832f2ea491a4c745ccd836174c61655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:37:16.885910  505342 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:37:16.886906  505342 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/kubeconfig: {Name:mk3dba620aff9c31a3afd43d9db4d9ff5be75367 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 10:37:16.887092  505342 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1025 10:37:16.887383  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1025 10:37:16.887622  505342 config.go:182] Loaded profile config "auto-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:37:16.887658  505342 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1025 10:37:16.887716  505342 addons.go:69] Setting storage-provisioner=true in profile "auto-821614"
	I1025 10:37:16.887730  505342 addons.go:238] Setting addon storage-provisioner=true in "auto-821614"
	I1025 10:37:16.887751  505342 host.go:66] Checking if "auto-821614" exists ...
	I1025 10:37:16.888051  505342 addons.go:69] Setting default-storageclass=true in profile "auto-821614"
	I1025 10:37:16.888066  505342 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-821614"
	I1025 10:37:16.888360  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.888841  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.893097  505342 out.go:179] * Verifying Kubernetes components...
	I1025 10:37:16.895970  505342 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1025 10:37:16.928511  505342 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1025 10:37:15.780567  501769 pod_ready.go:94] pod "coredns-66bc5c9577-xpwdq" is "Ready"
	I1025 10:37:15.780605  501769 pod_ready.go:86] duration metric: took 40.520798507s for pod "coredns-66bc5c9577-xpwdq" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.794474  501769 pod_ready.go:83] waiting for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.803412  501769 pod_ready.go:94] pod "etcd-no-preload-768303" is "Ready"
	I1025 10:37:15.803438  501769 pod_ready.go:86] duration metric: took 8.938691ms for pod "etcd-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.894941  501769 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.900376  501769 pod_ready.go:94] pod "kube-apiserver-no-preload-768303" is "Ready"
	I1025 10:37:15.900404  501769 pod_ready.go:86] duration metric: took 5.430337ms for pod "kube-apiserver-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.904022  501769 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:15.963989  501769 pod_ready.go:94] pod "kube-controller-manager-no-preload-768303" is "Ready"
	I1025 10:37:15.964017  501769 pod_ready.go:86] duration metric: took 59.960532ms for pod "kube-controller-manager-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.164262  501769 pod_ready.go:83] waiting for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.563482  501769 pod_ready.go:94] pod "kube-proxy-m9bnn" is "Ready"
	I1025 10:37:16.563510  501769 pod_ready.go:86] duration metric: took 399.218681ms for pod "kube-proxy-m9bnn" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:16.763785  501769 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:17.163454  501769 pod_ready.go:94] pod "kube-scheduler-no-preload-768303" is "Ready"
	I1025 10:37:17.163485  501769 pod_ready.go:86] duration metric: took 399.668582ms for pod "kube-scheduler-no-preload-768303" in "kube-system" namespace to be "Ready" or be gone ...
	I1025 10:37:17.163498  501769 pod_ready.go:40] duration metric: took 41.912196728s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1025 10:37:17.274423  501769 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1025 10:37:17.277838  501769 out.go:179] * Done! kubectl is now configured to use "no-preload-768303" cluster and "default" namespace by default
	I1025 10:37:16.931427  505342 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:37:16.931449  505342 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1025 10:37:16.931515  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:37:16.934958  505342 addons.go:238] Setting addon default-storageclass=true in "auto-821614"
	I1025 10:37:16.935002  505342 host.go:66] Checking if "auto-821614" exists ...
	I1025 10:37:16.935428  505342 cli_runner.go:164] Run: docker container inspect auto-821614 --format={{.State.Status}}
	I1025 10:37:16.973929  505342 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1025 10:37:16.973951  505342 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1025 10:37:16.974012  505342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-821614
	I1025 10:37:16.979057  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:37:17.001570  505342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33472 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/auto-821614/id_rsa Username:docker}
	I1025 10:37:17.274997  505342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1025 10:37:17.403362  505342 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1025 10:37:17.403479  505342 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1025 10:37:17.437853  505342 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1025 10:37:18.229191  505342 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
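
	Unescaped, the sed pipeline at 10:37:17 inserts this block into the coredns Corefile ahead of the forward directive (plus a "log" line before "errors"), which is what makes host.minikube.internal resolvable from pods:

	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
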
	I1025 10:37:18.231123  505342 node_ready.go:35] waiting up to 15m0s for node "auto-821614" to be "Ready" ...
	I1025 10:37:18.268001  505342 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1025 10:37:18.270568  505342 addons.go:514] duration metric: took 1.382886619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1025 10:37:18.734321  505342 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-821614" context rescaled to 1 replicas
	W1025 10:37:20.234027  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:22.234408  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:24.234660  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:26.735050  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	W1025 10:37:29.236498  505342 node_ready.go:57] node "auto-821614" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.444670486Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447867885Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447900542Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.447922631Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450879288Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450912027Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.450934304Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453915709Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453945354Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.453966877Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.456976671Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 25 10:37:14 no-preload-768303 crio[650]: time="2025-10-25T10:37:14.457006883Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.576529668Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1f80d718-525a-4fb4-83e0-58b7abcb747b name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.578174198Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58c6e520-74d7-4504-a1ec-33414f869bf0 name=/runtime.v1.ImageService/ImageStatus
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.57937187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=048b1a1a-3ee5-4db7-b5c2-873f3162c527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.579499388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.588307615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.588920523Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.612547172Z" level=info msg="Created container 768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=048b1a1a-3ee5-4db7-b5c2-873f3162c527 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.614652657Z" level=info msg="Starting container: 768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897" id=73a44d48-0b80-414d-8983-c77d21cdcb44 name=/runtime.v1.RuntimeService/StartContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.617554921Z" level=info msg="Started container" PID=1730 containerID=768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper id=73a44d48-0b80-414d-8983-c77d21cdcb44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e61236222d1e60038ddbcd5b8358d3adf6764607fb33f8516701bd43c5b117f
	Oct 25 10:37:22 no-preload-768303 conmon[1728]: conmon 768de17eae2727dc2b38 <ninfo>: container 1730 exited with status 1
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.88135235Z" level=info msg="Removing container: 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.892849596Z" level=info msg="Error loading conmon cgroup of container 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999: cgroup deleted" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 25 10:37:22 no-preload-768303 crio[650]: time="2025-10-25T10:37:22.901601789Z" level=info msg="Removed container 8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74/dashboard-metrics-scraper" id=2b98fee2-de9c-485c-865e-94d8e1699143 name=/runtime.v1.RuntimeService/RemoveContainer
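
	The WRITE, RENAME, CREATE sequence in the CNI monitoring events above is the standard atomic-update pattern: kindnet writes the complete config as 10-kindnet.conflist.temp and then renames it over the final name, so CRI-O's watcher never observes a half-written conflist. The same pattern in shell (contents elided; filenames from the log):

	    cat > /etc/cni/net.d/10-kindnet.conflist.temp <<'EOF'
	    { "cniVersion": "0.3.1", "name": "kindnet", "plugins": [ ... ] }
	    EOF
	    # rename(2) is atomic within a filesystem
	    mv /etc/cni/net.d/10-kindnet.conflist.temp /etc/cni/net.d/10-kindnet.conflist
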
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	768de17eae272       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago       Exited              dashboard-metrics-scraper   3                   8e61236222d1e       dashboard-metrics-scraper-6ffb444bf9-nrs74   kubernetes-dashboard
	62a15f1c7868d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   703b97ffefd79       storage-provisioner                          kube-system
	9732113c248eb       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   44 seconds ago       Running             kubernetes-dashboard        0                   6b560b20454bd       kubernetes-dashboard-855c9754f9-mk9wc        kubernetes-dashboard
	73f8b7df780f0       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   47edeaf4945aa       busybox                                      default
	44fb97e92f81b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   b92c859be13db       coredns-66bc5c9577-xpwdq                     kube-system
	c8f46af3f17bd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   57128b7046b95       kindnet-gkbg7                                kube-system
	0492235313c1a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   703b97ffefd79       storage-provisioner                          kube-system
	403792b3f1ed4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   b4065a08ddd60       kube-proxy-m9bnn                             kube-system
	c59a4eacffb62       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   dcc605674f701       kube-apiserver-no-preload-768303             kube-system
	c1fa525274c96       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a110e08986d42       kube-scheduler-no-preload-768303             kube-system
	82f4a3c724831       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   62f33adbf551a       kube-controller-manager-no-preload-768303    kube-system
	29ccba364a872       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2e799b1c5c6d7       etcd-no-preload-768303                       kube-system
	
	
	==> coredns [44fb97e92f81b6f58a2866e13945a4e276c3468dc6734864d6817b7fb99282a5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36790 - 11588 "HINFO IN 3328332253302026847.3606455303663770082. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030873242s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
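
	The three "dial tcp 10.96.0.1:443: i/o timeout" errors above are CoreDNS failing to reach the kubernetes Service VIP. 10.96.0.1 is virtual: it only answers once kube-proxy has programmed its forwarding rules, which is consistent with the informers recovering later in the run. A quick check from the node once rules are in place (a sketch):

	    kubectl get svc kubernetes -n default   # CLUSTER-IP should be 10.96.0.1
	    curl -sk https://10.96.0.1:443/livez    # answers once kube-proxy rules exist
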
	
	
	==> describe nodes <==
	Name:               no-preload-768303
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-768303
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e2f54f7d7a45b8c9088c0a429fcc1f5efbb9bd53
	                    minikube.k8s.io/name=no-preload-768303
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_25T10_35_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 25 Oct 2025 10:35:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-768303
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 25 Oct 2025 10:37:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 25 Oct 2025 10:37:03 +0000   Sat, 25 Oct 2025 10:35:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-768303
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                02b80f62-aa20-40d0-81a6-fccd316d79be
	  Boot ID:                    3729fb0b-b441-4078-a4f7-ae0fb40e9fa4
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-xpwdq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m4s
	  kube-system                 etcd-no-preload-768303                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-gkbg7                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-no-preload-768303              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-no-preload-768303     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-m9bnn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-no-preload-768303              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-nrs74    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mk9wc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   Starting                 58s                    kube-proxy       
	  Normal   Starting                 2m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m19s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x8 over 2m19s)  kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m10s                  kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m10s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m10s                  kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m10s                  kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m5s                   node-controller  Node no-preload-768303 event: Registered Node no-preload-768303 in Controller
	  Normal   NodeReady                108s                   kubelet          Node no-preload-768303 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node no-preload-768303 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node no-preload-768303 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node no-preload-768303 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                    node-controller  Node no-preload-768303 event: Registered Node no-preload-768303 in Controller
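
	For reference, the percentages in the Allocated resources table above are computed against the node's allocatable: 850m CPU requested out of 2000m is 42.5%, reported as 42%, and the 100m CPU limit (kindnet's only limit) is 5%; 220Mi of memory against 8022296Ki allocatable is roughly 2.8%, shown as 2%.
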
	
	
	==> dmesg <==
	[Oct25 10:16] overlayfs: idmapped layers are currently not supported
	[ +24.917476] overlayfs: idmapped layers are currently not supported
	[Oct25 10:17] overlayfs: idmapped layers are currently not supported
	[ +27.290615] overlayfs: idmapped layers are currently not supported
	[Oct25 10:19] overlayfs: idmapped layers are currently not supported
	[Oct25 10:20] overlayfs: idmapped layers are currently not supported
	[Oct25 10:21] overlayfs: idmapped layers are currently not supported
	[Oct25 10:22] overlayfs: idmapped layers are currently not supported
	[ +31.000692] overlayfs: idmapped layers are currently not supported
	[Oct25 10:24] overlayfs: idmapped layers are currently not supported
	[Oct25 10:26] overlayfs: idmapped layers are currently not supported
	[Oct25 10:27] overlayfs: idmapped layers are currently not supported
	[Oct25 10:28] overlayfs: idmapped layers are currently not supported
	[ +15.077774] overlayfs: idmapped layers are currently not supported
	[Oct25 10:29] overlayfs: idmapped layers are currently not supported
	[Oct25 10:30] overlayfs: idmapped layers are currently not supported
	[Oct25 10:32] overlayfs: idmapped layers are currently not supported
	[ +37.840172] overlayfs: idmapped layers are currently not supported
	[Oct25 10:33] overlayfs: idmapped layers are currently not supported
	[Oct25 10:34] overlayfs: idmapped layers are currently not supported
	[ +38.146630] overlayfs: idmapped layers are currently not supported
	[Oct25 10:35] overlayfs: idmapped layers are currently not supported
	[Oct25 10:36] overlayfs: idmapped layers are currently not supported
	[  +9.574283] overlayfs: idmapped layers are currently not supported
	[Oct25 10:37] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [29ccba364a87269d3610c2a46d268a0cb3c524dfed251a515f0693f9f94692a9] <==
	{"level":"warn","ts":"2025-10-25T10:36:29.408085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.459423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.527621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.575501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.629466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.668407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.732310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.801196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.852832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.915450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:29.983796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.024249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.057141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.089536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.137015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.168111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.197782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.237911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.273532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.289497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.337840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.366365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.429592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.485249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-25T10:36:30.641356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37182","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:37:34 up  2:20,  0 user,  load average: 4.59, 4.12, 3.43
	Linux no-preload-768303 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c8f46af3f17bdb7311a5124e4ee22cdc269f9aca8899d31cda046d5330eb7dd0] <==
	I1025 10:36:34.224163       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1025 10:36:34.224596       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1025 10:36:34.224738       1 main.go:148] setting mtu 1500 for CNI 
	I1025 10:36:34.224750       1 main.go:178] kindnetd IP family: "ipv4"
	I1025 10:36:34.224760       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-25T10:36:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1025 10:36:34.431088       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1025 10:36:34.431174       1 controller.go:381] "Waiting for informer caches to sync"
	I1025 10:36:34.431373       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1025 10:36:34.432166       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1025 10:37:04.431293       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1025 10:37:04.432532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1025 10:37:04.432642       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1025 10:37:04.432727       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1025 10:37:05.731744       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1025 10:37:05.731839       1 metrics.go:72] Registering metrics
	I1025 10:37:05.731914       1 controller.go:711] "Syncing nftables rules"
	I1025 10:37:14.435778       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:37:14.435833       1 main.go:301] handling current node
	I1025 10:37:24.430976       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:37:24.431006       1 main.go:301] handling current node
	I1025 10:37:34.436031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1025 10:37:34.436062       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c59a4eacffb6261c64ec4a0dd5309bb830374bf2bc988c69c85ef5c9dce0ad2f] <==
	I1025 10:36:32.446987       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1025 10:36:32.455384       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1025 10:36:32.455468       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1025 10:36:32.466375       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1025 10:36:32.466709       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1025 10:36:32.466739       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1025 10:36:32.466773       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1025 10:36:32.468893       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1025 10:36:32.497543       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1025 10:36:32.500176       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1025 10:36:32.501314       1 aggregator.go:171] initial CRD sync complete...
	I1025 10:36:32.501338       1 autoregister_controller.go:144] Starting autoregister controller
	I1025 10:36:32.501347       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1025 10:36:32.501355       1 cache.go:39] Caches are synced for autoregister controller
	I1025 10:36:32.725075       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1025 10:36:33.448823       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1025 10:36:33.602580       1 controller.go:667] quota admission added evaluator for: namespaces
	I1025 10:36:34.221489       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1025 10:36:34.515870       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1025 10:36:34.634351       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1025 10:36:34.947109       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.93.15"}
	I1025 10:36:35.012166       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.152.173"}
	I1025 10:36:36.955384       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1025 10:36:37.282466       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1025 10:36:37.331639       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [82f4a3c724831362a1c08516c0aef7bd256ae88928ed97fce85546159dfb6d88] <==
	I1025 10:36:36.956568       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1025 10:36:36.956698       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1025 10:36:36.962557       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1025 10:36:36.966782       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1025 10:36:36.967111       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1025 10:36:36.967781       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-768303"
	I1025 10:36:36.967922       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1025 10:36:36.969609       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:36.975048       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1025 10:36:36.975356       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1025 10:36:36.975623       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1025 10:36:36.977428       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1025 10:36:36.977514       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1025 10:36:36.977532       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1025 10:36:36.977938       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1025 10:36:36.980254       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1025 10:36:36.980386       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1025 10:36:36.984209       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1025 10:36:36.989059       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1025 10:36:36.994725       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1025 10:36:36.998031       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1025 10:36:36.999234       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1025 10:36:37.004322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1025 10:36:37.004488       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1025 10:36:37.004523       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [403792b3f1ed46564bd4347a8a8647977de7599f4e850acc81992dbd9bc4e22b] <==
	I1025 10:36:35.472033       1 server_linux.go:53] "Using iptables proxy"
	I1025 10:36:35.807990       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1025 10:36:35.909038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1025 10:36:35.909147       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1025 10:36:35.909266       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1025 10:36:35.934522       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1025 10:36:35.934635       1 server_linux.go:132] "Using iptables Proxier"
	I1025 10:36:35.940784       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1025 10:36:35.941173       1 server.go:527] "Version info" version="v1.34.1"
	I1025 10:36:35.941370       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:35.942592       1 config.go:200] "Starting service config controller"
	I1025 10:36:35.942650       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1025 10:36:35.942693       1 config.go:106] "Starting endpoint slice config controller"
	I1025 10:36:35.942720       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1025 10:36:35.942756       1 config.go:403] "Starting serviceCIDR config controller"
	I1025 10:36:35.942780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1025 10:36:35.943804       1 config.go:309] "Starting node config controller"
	I1025 10:36:35.944715       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1025 10:36:35.944768       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1025 10:36:36.043668       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1025 10:36:36.043715       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1025 10:36:36.043677       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c1fa525274c96926fd1852026a7ae2899e382b1b5a53998cb6b5bd410772f848] <==
	I1025 10:36:30.710980       1 serving.go:386] Generated self-signed cert in-memory
	I1025 10:36:35.670064       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1025 10:36:35.670102       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1025 10:36:35.689294       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1025 10:36:35.689388       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1025 10:36:35.689417       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1025 10:36:35.689449       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1025 10:36:35.700277       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.700425       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.700473       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:36:35.700505       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1025 10:36:35.789710       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1025 10:36:35.801585       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1025 10:36:35.801735       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 25 10:36:44 no-preload-768303 kubelet[773]: I1025 10:36:44.760703     773 scope.go:117] "RemoveContainer" containerID="98071963c73700d8860d3870556be774087396575a1141aac0ca689a0a18b6cd"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: I1025 10:36:45.767040     773 scope.go:117] "RemoveContainer" containerID="98071963c73700d8860d3870556be774087396575a1141aac0ca689a0a18b6cd"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: I1025 10:36:45.767375     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:45 no-preload-768303 kubelet[773]: E1025 10:36:45.767741     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:46 no-preload-768303 kubelet[773]: I1025 10:36:46.770795     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:46 no-preload-768303 kubelet[773]: E1025 10:36:46.770941     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:47 no-preload-768303 kubelet[773]: I1025 10:36:47.786952     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:47 no-preload-768303 kubelet[773]: E1025 10:36:47.787515     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:58 no-preload-768303 kubelet[773]: I1025 10:36:58.577245     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:58 no-preload-768303 kubelet[773]: I1025 10:36:58.815022     773 scope.go:117] "RemoveContainer" containerID="a6b0a744e82dae498b8d2eb60b9729adaa260f09ee9c700c6258ae48d1517932"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: I1025 10:36:59.819468     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: E1025 10:36:59.820116     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:36:59 no-preload-768303 kubelet[773]: I1025 10:36:59.840970     773 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mk9wc" podStartSLOduration=10.983475215 podStartE2EDuration="22.840860823s" podCreationTimestamp="2025-10-25 10:36:37 +0000 UTC" firstStartedPulling="2025-10-25 10:36:37.818198793 +0000 UTC m=+13.694419095" lastFinishedPulling="2025-10-25 10:36:49.675584401 +0000 UTC m=+25.551804703" observedRunningTime="2025-10-25 10:36:49.832047094 +0000 UTC m=+25.708267404" watchObservedRunningTime="2025-10-25 10:36:59.840860823 +0000 UTC m=+35.717081150"
	Oct 25 10:37:04 no-preload-768303 kubelet[773]: I1025 10:37:04.834367     773 scope.go:117] "RemoveContainer" containerID="0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538"
	Oct 25 10:37:07 no-preload-768303 kubelet[773]: I1025 10:37:07.766894     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:07 no-preload-768303 kubelet[773]: E1025 10:37:07.767551     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.575753     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.879407     773 scope.go:117] "RemoveContainer" containerID="8b549cf2eee3d5046a9116d8c855221b6dc83ada103c3edf74198b275792b999"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: I1025 10:37:22.879681     773 scope.go:117] "RemoveContainer" containerID="768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	Oct 25 10:37:22 no-preload-768303 kubelet[773]: E1025 10:37:22.879836     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:27 no-preload-768303 kubelet[773]: I1025 10:37:27.755022     773 scope.go:117] "RemoveContainer" containerID="768de17eae2727dc2b38c910a646fc5e11509ff6fe18771502473f27c5a37897"
	Oct 25 10:37:27 no-preload-768303 kubelet[773]: E1025 10:37:27.755814     773 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-nrs74_kubernetes-dashboard(b23abda7-8857-415a-a8da-7b89d29698f1)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-nrs74" podUID="b23abda7-8857-415a-a8da-7b89d29698f1"
	Oct 25 10:37:29 no-preload-768303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 25 10:37:29 no-preload-768303 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 25 10:37:29 no-preload-768303 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9732113c248ebd098cdf4f6f6e91edb5873b14fea51851da7264013a9aacb532] <==
	2025/10/25 10:36:49 Using namespace: kubernetes-dashboard
	2025/10/25 10:36:49 Using in-cluster config to connect to apiserver
	2025/10/25 10:36:49 Using secret token for csrf signing
	2025/10/25 10:36:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/25 10:36:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/25 10:36:49 Successful initial request to the apiserver, version: v1.34.1
	2025/10/25 10:36:49 Generating JWE encryption key
	2025/10/25 10:36:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/25 10:36:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/25 10:36:50 Initializing JWE encryption key from synchronized object
	2025/10/25 10:36:50 Creating in-cluster Sidecar client
	2025/10/25 10:36:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:36:50 Serving insecurely on HTTP port: 9090
	2025/10/25 10:37:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/25 10:36:49 Starting overwatch
	
	
	==> storage-provisioner [0492235313c1aebca4f7685caee08085cf57173a0fa4189ed3a0c0b1bb9f3538] <==
	I1025 10:36:34.761969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1025 10:37:04.768810       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [62a15f1c7868d04631806759f4487bee1b2c75b4a3a11adc84948d3d78dc6a31] <==
	W1025 10:37:04.980483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:08.436040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:12.697137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:16.294992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:19.348954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.371268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.376217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:37:22.376936       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1025 10:37:22.376993       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"911c92e5-c16f-402a-9e0d-e46ef78d17f2", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f became leader
	I1025 10:37:22.377196       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f!
	W1025 10:37:22.386127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:22.389411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1025 10:37:22.477682       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-768303_5a9e2cf2-2802-4474-ace9-ecf8a5febe6f!
	W1025 10:37:24.392642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:24.400112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:26.403970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:26.408304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:28.412018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:28.418400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:30.421691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:30.426952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:32.430167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:32.440371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:34.448678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1025 10:37:34.455800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-768303 -n no-preload-768303
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-768303 -n no-preload-768303: exit status 2 (400.372644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-768303 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.44s)
E1025 10:43:23.316189  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:43:31.255598  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:43:33.627102  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
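The post-mortem above is typical of this run's Pause failures: status still reports the apiserver as "Running" while dashboard-metrics-scraper loops through CrashLoopBackOff. A minimal triage sketch, assuming the no-preload-768303 profile still exists and that the pods carry the k8s-app label from the upstream dashboard manifest (both are assumptions, not facts from this log):

	# Restart counts and events for the crashing scraper pod
	kubectl --context no-preload-768303 -n kubernetes-dashboard \
	  describe pod -l k8s-app=dashboard-metrics-scraper
	# Output of the previously crashed container instance
	kubectl --context no-preload-768303 -n kubernetes-dashboard \
	  logs -l k8s-app=dashboard-metrics-scraper --previous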

                                                
                                    

Test pass (259/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.55
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.74
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.15
18 TestDownloadOnly/v1.34.1/DeleteAll 0.37
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 174.07
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.78
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 41.85
50 TestCertExpiration 253.28
52 TestForceSystemdFlag 37.41
53 TestForceSystemdEnv 40.54
58 TestErrorSpam/setup 28.75
59 TestErrorSpam/start 0.75
60 TestErrorSpam/status 1.12
61 TestErrorSpam/pause 6
62 TestErrorSpam/unpause 5.84
63 TestErrorSpam/stop 1.49
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 76.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.63
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.08
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 33.96
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.52
86 TestFunctional/serial/LogsFileCmd 1.47
87 TestFunctional/serial/InvalidService 4.62
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 9.53
91 TestFunctional/parallel/DryRun 0.59
92 TestFunctional/parallel/InternationalLanguage 0.29
93 TestFunctional/parallel/StatusCmd 1.33
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 25.95
101 TestFunctional/parallel/SSHCmd 0.74
102 TestFunctional/parallel/CpCmd 2.46
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.22
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.34
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.4
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.46
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.84
130 TestFunctional/parallel/MountCmd/specific-port 1.95
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.95
132 TestFunctional/parallel/ServiceCmd/List 0.64
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.38
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.02
144 TestFunctional/parallel/ImageCommands/Setup 0.71
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 205.29
163 TestMultiControlPlane/serial/DeployApp 8.44
164 TestMultiControlPlane/serial/PingHostFromPods 1.43
165 TestMultiControlPlane/serial/AddWorkerNode 59.99
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.04
169 TestMultiControlPlane/serial/StopSecondaryNode 12.91
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
171 TestMultiControlPlane/serial/RestartSecondaryNode 32.2
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.2
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.33
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.63
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.18
177 TestMultiControlPlane/serial/RestartCluster 68.94
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
179 TestMultiControlPlane/serial/AddSecondaryNode 49.82
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
184 TestJSONOutput/start/Command 81.44
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.84
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.24
209 TestKicCustomNetwork/create_custom_network 42.36
210 TestKicCustomNetwork/use_default_bridge_network 40.67
211 TestKicExistingNetwork 35.66
212 TestKicCustomSubnet 36.33
213 TestKicStaticIP 36.59
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 72.87
218 TestMountStart/serial/StartWithMountFirst 6.41
219 TestMountStart/serial/VerifyMountFirst 0.28
220 TestMountStart/serial/StartWithMountSecond 9.38
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.7
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.29
225 TestMountStart/serial/RestartStopped 8.3
226 TestMountStart/serial/VerifyMountPostStop 0.28
229 TestMultiNode/serial/FreshStart2Nodes 139.45
230 TestMultiNode/serial/DeployApp2Nodes 5.48
231 TestMultiNode/serial/PingHostFrom2Pods 0.94
232 TestMultiNode/serial/AddNode 58.26
233 TestMultiNode/serial/MultiNodeLabels 0.16
234 TestMultiNode/serial/ProfileList 0.91
235 TestMultiNode/serial/CopyFile 10.53
236 TestMultiNode/serial/StopNode 2.43
237 TestMultiNode/serial/StartAfterStop 8.44
238 TestMultiNode/serial/RestartKeepsNodes 78.25
239 TestMultiNode/serial/DeleteNode 5.74
240 TestMultiNode/serial/StopMultiNode 24.05
241 TestMultiNode/serial/RestartMultiNode 53.17
242 TestMultiNode/serial/ValidateNameConflict 36.9
247 TestPreload 124.07
252 TestInsufficientStorage 13.86
253 TestRunningBinaryUpgrade 52.32
255 TestKubernetesUpgrade 359.81
256 TestMissingContainerUpgrade 120.28
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 47.83
260 TestNoKubernetes/serial/StartWithStopK8s 9.23
261 TestNoKubernetes/serial/Start 8.14
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.48
263 TestNoKubernetes/serial/ProfileList 3.17
264 TestNoKubernetes/serial/Stop 1.32
265 TestNoKubernetes/serial/StartNoArgs 6.54
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
267 TestStoppedBinaryUpgrade/Setup 0.75
268 TestStoppedBinaryUpgrade/Upgrade 55.73
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
278 TestPause/serial/Start 83.64
279 TestPause/serial/SecondStartNoReconfiguration 27.63
288 TestNetworkPlugins/group/false 5.28
293 TestStartStop/group/old-k8s-version/serial/FirstStart 65.06
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
296 TestStartStop/group/old-k8s-version/serial/Stop 12.01
297 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
298 TestStartStop/group/old-k8s-version/serial/SecondStart 51.89
299 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
300 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
301 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.83
306 TestStartStop/group/embed-certs/serial/FirstStart 83.62
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.78
312 TestStartStop/group/embed-certs/serial/DeployApp 9.39
314 TestStartStop/group/embed-certs/serial/Stop 12.58
315 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
316 TestStartStop/group/embed-certs/serial/SecondStart 50.75
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
322 TestStartStop/group/no-preload/serial/FirstStart 69.83
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
328 TestStartStop/group/newest-cni/serial/FirstStart 45.53
329 TestStartStop/group/no-preload/serial/DeployApp 9.4
331 TestStartStop/group/no-preload/serial/Stop 12.17
332 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.36
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
336 TestStartStop/group/newest-cni/serial/SecondStart 16.77
337 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.28
338 TestStartStop/group/no-preload/serial/SecondStart 62.76
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
343 TestNetworkPlugins/group/auto/Start 86
344 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
345 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
346 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
348 TestNetworkPlugins/group/kindnet/Start 84.37
349 TestNetworkPlugins/group/auto/KubeletFlags 0.42
350 TestNetworkPlugins/group/auto/NetCatPod 13.37
351 TestNetworkPlugins/group/auto/DNS 0.17
352 TestNetworkPlugins/group/auto/Localhost 0.15
353 TestNetworkPlugins/group/auto/HairPin 0.15
354 TestNetworkPlugins/group/calico/Start 61.04
355 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
357 TestNetworkPlugins/group/kindnet/NetCatPod 12.28
358 TestNetworkPlugins/group/kindnet/DNS 0.24
359 TestNetworkPlugins/group/kindnet/Localhost 0.2
360 TestNetworkPlugins/group/kindnet/HairPin 0.18
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.37
363 TestNetworkPlugins/group/calico/NetCatPod 12.34
364 TestNetworkPlugins/group/custom-flannel/Start 71.04
365 TestNetworkPlugins/group/calico/DNS 0.24
366 TestNetworkPlugins/group/calico/Localhost 0.16
367 TestNetworkPlugins/group/calico/HairPin 0.15
368 TestNetworkPlugins/group/enable-default-cni/Start 77.86
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.34
371 TestNetworkPlugins/group/custom-flannel/DNS 0.15
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
374 TestNetworkPlugins/group/flannel/Start 63.57
375 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
376 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.37
377 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
378 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
379 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
380 TestNetworkPlugins/group/bridge/Start 75.36
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
383 TestNetworkPlugins/group/flannel/NetCatPod 12.39
384 TestNetworkPlugins/group/flannel/DNS 0.2
385 TestNetworkPlugins/group/flannel/Localhost 0.2
386 TestNetworkPlugins/group/flannel/HairPin 0.19
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
388 TestNetworkPlugins/group/bridge/NetCatPod 10.27
389 TestNetworkPlugins/group/bridge/DNS 0.15
390 TestNetworkPlugins/group/bridge/Localhost 0.12
391 TestNetworkPlugins/group/bridge/HairPin 0.17
x
+
TestDownloadOnly/v1.28.0/json-events (5.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-147571 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-147571 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.551672068s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.55s)
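For context on what the json-events subtests assert: with -o=json, minikube start writes one CloudEvents-style JSON object per line instead of human-readable output. A sketch of watching step progress with jq; the event type and data field names are assumptions based on recent minikube releases, and demo-profile is a placeholder:

	out/minikube-linux-arm64 start -o=json --download-only -p demo-profile \
	    --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
	           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'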

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1025 09:32:26.632477  294017 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1025 09:32:26.632555  294017 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
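The check this subtest performs can be reproduced by hand: it only verifies that the cached preload tarball is on disk. A sketch assuming the default $HOME/.minikube layout (the run above used a Jenkins-specific MINIKUBE_HOME instead):

	preload="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4"
	[ -f "$preload" ] && echo "preload present" || echo "preload missing"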

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-147571
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-147571: exit status 85 (84.893946ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-147571 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-147571 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:21.127063  294022 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:21.127338  294022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:21.127373  294022 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:21.127395  294022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:21.127685  294022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	W1025 09:32:21.127849  294022 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21794-292167/.minikube/config/config.json: open /home/jenkins/minikube-integration/21794-292167/.minikube/config/config.json: no such file or directory
	I1025 09:32:21.128282  294022 out.go:368] Setting JSON to true
	I1025 09:32:21.129165  294022 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4491,"bootTime":1761380250,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:32:21.129263  294022 start.go:141] virtualization:  
	I1025 09:32:21.133470  294022 out.go:99] [download-only-147571] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1025 09:32:21.133698  294022 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball: no such file or directory
	I1025 09:32:21.133775  294022 notify.go:220] Checking for updates...
	I1025 09:32:21.136588  294022 out.go:171] MINIKUBE_LOCATION=21794
	I1025 09:32:21.139704  294022 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:21.142615  294022 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:32:21.145552  294022 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:32:21.148384  294022 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 09:32:21.154149  294022 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:32:21.154442  294022 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:21.177050  294022 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:21.177171  294022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:21.241345  294022 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 09:32:21.231941213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:21.241502  294022 docker.go:318] overlay module found
	I1025 09:32:21.244548  294022 out.go:99] Using the docker driver based on user configuration
	I1025 09:32:21.244591  294022 start.go:305] selected driver: docker
	I1025 09:32:21.244598  294022 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:21.244715  294022 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:21.295609  294022 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-25 09:32:21.285965649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:21.295756  294022 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:21.296050  294022 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 09:32:21.296200  294022 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:32:21.299259  294022 out.go:171] Using Docker driver with root privileges
	I1025 09:32:21.302217  294022 cni.go:84] Creating CNI manager for ""
	I1025 09:32:21.302294  294022 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1025 09:32:21.302304  294022 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1025 09:32:21.302384  294022 start.go:349] cluster config:
	{Name:download-only-147571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-147571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:32:21.305340  294022 out.go:99] Starting "download-only-147571" primary control-plane node in "download-only-147571" cluster
	I1025 09:32:21.305371  294022 cache.go:123] Beginning downloading kic base image for docker with crio
	I1025 09:32:21.308281  294022 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1025 09:32:21.308329  294022 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:21.308491  294022 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1025 09:32:21.323446  294022 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:21.324334  294022 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1025 09:32:21.324435  294022 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1025 09:32:21.369800  294022 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:21.369826  294022 cache.go:58] Caching tarball of preloaded images
	I1025 09:32:21.372742  294022 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:21.376062  294022 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1025 09:32:21.376084  294022 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1025 09:32:21.466915  294022 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1025 09:32:21.467036  294022 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1025 09:32:24.380912  294022 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1025 09:32:24.381283  294022 profile.go:143] Saving config to /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/download-only-147571/config.json ...
	I1025 09:32:24.381325  294022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/download-only-147571/config.json: {Name:mkd491d39f655d86f78d00005a5f24b86e2d239b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1025 09:32:24.381511  294022 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1025 09:32:24.381696  294022 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21794-292167/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-147571 host does not exist
	  To start a cluster, run: "minikube start -p download-only-147571"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
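
For reference: the preload fetch above appends the checksum minikube got from the GCS API to the download URL ("?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b"), so the tarball can be verified after download. A minimal Go sketch of that verification step, assuming a local copy of the tarball (the relative path below is illustrative, not minikube's actual code):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it with the expected hex sum.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// Sum taken from the "Got checksum from GCS API" line above.
	fmt.Println(verifyMD5("preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"e092595ade89dbfc477bd4cd6b9c633b"))
}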

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-147571
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.74s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-828998 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-828998 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.734985681s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.74s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1025 09:32:30.809117  294017 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1025 09:32:30.809154  294017 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-828998
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-828998: exit status 85 (151.051574ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-147571 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-147571 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ delete  │ -p download-only-147571                                                                                                                                                   │ download-only-147571 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │ 25 Oct 25 09:32 UTC │
	│ start   │ -o=json --download-only -p download-only-828998 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-828998 │ jenkins │ v1.37.0 │ 25 Oct 25 09:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/25 09:32:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1025 09:32:27.117868  294218 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:32:27.117979  294218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:27.117989  294218 out.go:374] Setting ErrFile to fd 2...
	I1025 09:32:27.117994  294218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:32:27.118280  294218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:32:27.118670  294218 out.go:368] Setting JSON to true
	I1025 09:32:27.119478  294218 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4497,"bootTime":1761380250,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:32:27.119543  294218 start.go:141] virtualization:  
	I1025 09:32:27.122820  294218 out.go:99] [download-only-828998] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:32:27.123030  294218 notify.go:220] Checking for updates...
	I1025 09:32:27.125929  294218 out.go:171] MINIKUBE_LOCATION=21794
	I1025 09:32:27.128938  294218 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:32:27.131904  294218 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:32:27.134826  294218 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:32:27.137606  294218 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1025 09:32:27.143117  294218 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1025 09:32:27.143381  294218 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:32:27.175768  294218 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:32:27.175881  294218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:27.236094  294218 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 09:32:27.226284206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:27.236209  294218 docker.go:318] overlay module found
	I1025 09:32:27.239233  294218 out.go:99] Using the docker driver based on user configuration
	I1025 09:32:27.239272  294218 start.go:305] selected driver: docker
	I1025 09:32:27.239283  294218 start.go:925] validating driver "docker" against <nil>
	I1025 09:32:27.239403  294218 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:32:27.291330  294218 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-25 09:32:27.281691397 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:32:27.291499  294218 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1025 09:32:27.291796  294218 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1025 09:32:27.291956  294218 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1025 09:32:27.294994  294218 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-828998 host does not exist
	  To start a cluster, run: "minikube start -p download-only-828998"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.15s)

TestDownloadOnly/v1.34.1/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.37s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-828998
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1025 09:32:32.690913  294017 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-490963 --alsologtostderr --binary-mirror http://127.0.0.1:46207 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-490963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-490963
--- PASS: TestBinaryMirror (0.60s)
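
For reference: the run above points --binary-mirror at http://127.0.0.1:46207, i.e. a local server laid out like dl.k8s.io so the kubectl download never has to leave the machine. A minimal Go sketch of such a mirror (the ./mirror directory name is an assumption; the test's own server implementation is not shown in this log):

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve files like ./mirror/v1.34.1/bin/linux/arm64/kubectl at the
	// address passed to --binary-mirror.
	log.Fatal(http.ListenAndServe("127.0.0.1:46207",
		http.FileServer(http.Dir("./mirror"))))
}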

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-523976
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-523976: exit status 85 (70.759033ms)

-- stdout --
	* Profile "addons-523976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-523976"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-523976
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-523976: exit status 85 (77.22531ms)

-- stdout --
	* Profile "addons-523976" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-523976"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (174.07s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-523976 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-523976 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m54.06896212s)
--- PASS: TestAddons/Setup (174.07s)

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-523976 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-523976 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (9.78s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-523976 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-523976 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3beb4536-823f-4080-88ea-7802c2cd43b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3beb4536-823f-4080-88ea-7802c2cd43b4] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003870114s
addons_test.go:694: (dbg) Run:  kubectl --context addons-523976 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-523976 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-523976 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-523976 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.78s)

TestAddons/StoppedEnableDisable (12.45s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-523976
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-523976: (12.15121136s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-523976
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-523976
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-523976
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

TestCertOptions (41.85s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-506318 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (38.998587672s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-506318 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-506318 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-506318 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-506318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-506318
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-506318: (2.055263566s)
--- PASS: TestCertOptions (41.85s)
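
For reference: the openssl invocation above inspects /var/lib/minikube/certs/apiserver.crt to confirm the extra names and IPs requested by the --apiserver-names/--apiserver-ips flags landed in the certificate. The same check in a few lines of Go (a sketch, assuming the cert has been copied out of the node to a local file):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)  // should include localhost, www.google.com
	fmt.Println("IPs:", cert.IPAddresses)     // should include 127.0.0.1, 192.168.15.15
}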

TestCertExpiration (253.28s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-313068 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (37.241944809s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-313068 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (33.296189859s)
helpers_test.go:175: Cleaning up "cert-expiration-313068" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-313068
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-313068: (2.744796868s)
--- PASS: TestCertExpiration (253.28s)
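
For reference: the two starts above exercise --cert-expiration=3m and then 8760h (365 days). A short Go sketch for checking what validity window a given certificate actually received (the client.crt path is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("valid for:", cert.NotAfter.Sub(cert.NotBefore)) // ~3m0s or ~8760h0m0s
	fmt.Println("remaining:", time.Until(cert.NotAfter).Round(time.Second))
}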

TestForceSystemdFlag (37.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-369331 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-369331 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.054335246s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-369331 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-369331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-369331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-369331: (3.009143372s)
--- PASS: TestForceSystemdFlag (37.41s)

TestForceSystemdEnv (40.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-068963 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.656204893s)
helpers_test.go:175: Cleaning up "force-systemd-env-068963" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-068963
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-068963: (2.885897542s)
--- PASS: TestForceSystemdEnv (40.54s)

TestErrorSpam/setup (28.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-015425 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-015425 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-015425 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-015425 --driver=docker  --container-runtime=crio: (28.74604637s)
--- PASS: TestErrorSpam/setup (28.75s)

TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

TestErrorSpam/status (1.12s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 status
--- PASS: TestErrorSpam/status (1.12s)

TestErrorSpam/pause (6s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause: exit status 80 (2.212884697s)

-- stdout --
	* Pausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause: exit status 80 (1.715434722s)

-- stdout --
	* Pausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause: exit status 80 (2.073556366s)

-- stdout --
	* Pausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.00s)
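
For reference: all three pause attempts above fail identically: minikube shells out to "sudo runc list -f json" inside the node, and runc exits 1 because its default state directory /run/runc does not exist, which surfaces as the GUEST_PAUSE error with exit status 80 (the test itself still passes, as shown). A minimal Go sketch of that probe, assuming sudo and runc are available on the host it runs on (this is not minikube's actual code path):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this node the combined output contains:
		//   level=error msg="open /run/runc: no such file or directory"
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("containers: %s\n", out)
}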

TestErrorSpam/unpause (5.84s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause: exit status 80 (2.236132573s)

-- stdout --
	* Unpausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause: exit status 80 (1.947081336s)

-- stdout --
	* Unpausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause: exit status 80 (1.654805906s)

-- stdout --
	* Unpausing node nospam-015425 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-25T09:39:40Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.84s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 stop: (1.309010839s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015425 --log_dir /tmp/nospam-015425 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21794-292167/.minikube/files/etc/test/nested/copy/294017/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (76.73s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1025 09:40:28.680717  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:28.687217  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:28.698679  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:28.720210  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:28.761661  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:28.843244  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:29.004788  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:29.326483  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:29.968522  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:31.250206  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:33.811583  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:38.933628  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:40:49.175678  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-900552 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m16.729098353s)
--- PASS: TestFunctional/serial/StartWithProxy (76.73s)
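
For reference: the cert_rotation errors above repeat because client-go keeps retrying a client certificate load for the addons-523976 profile whose file no longer exists; note the gaps between the timestamps roughly double, from a few milliseconds up to ~10s. A generic sketch of that exponential-backoff retry shape (not client-go's actual implementation; the file name is illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	delay := 10 * time.Millisecond
	for attempt := 0; attempt < 12; attempt++ {
		if _, err := os.ReadFile("client.crt"); err == nil {
			fmt.Println("certificate loaded")
			return
		}
		time.Sleep(delay)
		delay *= 2 // doubling interval, matching the log timestamps above
	}
	fmt.Println("gave up")
}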

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.63s)

=== RUN   TestFunctional/serial/SoftStart
I1025 09:41:03.430758  294017 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --alsologtostderr -v=8
E1025 09:41:09.657876  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-900552 --alsologtostderr -v=8: (29.616822272s)
functional_test.go:678: soft start took 29.627606184s for "functional-900552" cluster.
I1025 09:41:33.047931  294017 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.63s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-900552 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:3.1: (1.147197842s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:3.3: (1.218524722s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 cache add registry.k8s.io/pause:latest: (1.112800892s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-900552 /tmp/TestFunctionalserialCacheCmdcacheadd_local2921783320/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache add minikube-local-cache-test:functional-900552
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache delete minikube-local-cache-test:functional-900552
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-900552
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.298583ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
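The cache_reload flow above is: remove the image from the node with crictl rmi, confirm crictl inspecti now fails, run cache reload, and confirm the image is back. A minimal Go sketch of the same cycle, assuming a minikube binary on PATH and the existing functional-900552 profile (this is not the harness code from functional_test.go); note that crictl inspecti's exit status is the whole signal, no output parsing is needed:

// cache_reload_sketch.go
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether the node's runtime knows the image;
// crictl inspecti exits non-zero when the image is absent.
func imagePresent(profile, image string) bool {
	return exec.Command("minikube", "-p", profile, "ssh", "sudo crictl inspecti "+image).Run() == nil
}

func main() {
	const profile = "functional-900552"
	const image = "registry.k8s.io/pause:latest"

	// Remove the image from the node's runtime, as the test does.
	exec.Command("minikube", "-p", profile, "ssh", "sudo crictl rmi "+image).Run()
	if imagePresent(profile, image) {
		fmt.Println("expected the image to be gone after rmi")
		return
	}
	// cache reload pushes every image in minikube's local cache back into the node.
	if err := exec.Command("minikube", "-p", profile, "cache", "reload").Run(); err != nil {
		fmt.Println("cache reload failed:", err)
		return
	}
	fmt.Println("image present again:", imagePresent(profile, image))
}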

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 kubectl -- --context functional-900552 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-900552 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (33.96s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1025 09:41:50.620359  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-900552 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.962841074s)
functional_test.go:776: restart took 33.962930248s for "functional-900552" cluster.
I1025 09:42:14.395483  294017 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.96s)
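The restart above passes --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision, which should surface as a command-line flag on the kube-apiserver static pod. A hedged sketch of how one might verify that out-of-band, assuming a kubeadm-style control plane whose apiserver pod carries the component=kube-apiserver label:

// extra_config_check.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the apiserver container's command array straight from the pod spec.
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"-n", "kube-system", "get", "pods", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	if strings.Contains(string(out), "--enable-admission-plugins=NamespaceAutoProvision") {
		fmt.Println("NamespaceAutoProvision reached the apiserver command line")
	} else {
		fmt.Println("flag not found; apiserver command:", string(out))
	}
}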

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-900552 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)
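ComponentHealth fetches the control-plane pods as JSON and requires phase Running plus a Ready condition for each component, which is exactly the phase/status pairs logged above. A minimal sketch of the same check, assuming kubectl can reach the functional-900552 context; the struct fields follow the core Pod API:

// component_health_sketch.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList mirrors just the fields the check needs; encoding/json matches
// the lowercase API keys case-insensitively.
type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}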

TestFunctional/serial/LogsCmd (1.52s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 logs: (1.517337238s)
--- PASS: TestFunctional/serial/LogsCmd (1.52s)

TestFunctional/serial/LogsFileCmd (1.47s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 logs --file /tmp/TestFunctionalserialLogsFileCmd3616494145/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 logs --file /tmp/TestFunctionalserialLogsFileCmd3616494145/001/logs.txt: (1.471140969s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.47s)

TestFunctional/serial/InvalidService (4.62s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-900552 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-900552
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-900552: exit status 115 (396.44701ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32203 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-900552 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.62s)
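Exit status 115 (SVC_UNREACHABLE) fires because invalid-svc selects no running pod, so the NodePort URL printed in the table has nothing behind it. A sketch of the underlying condition, assuming the same context and manifest; a Service with no ready endpoint addresses is what the error flags:

// svc_endpoints_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the ready endpoint IPs backing the service, if any.
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("invalid-svc has no ready endpoints; `minikube service` would exit 115")
		return
	}
	fmt.Println("ready endpoints:", string(out))
}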

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 config get cpus: exit status 14 (67.119727ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 config get cpus: exit status 14 (76.333287ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
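Exit status 14 is the "key not found" code for config get, which is why both get calls after an unset fail above while the get after a set succeeds. A sketch of the same unset/get/set round trip, assuming a minikube binary on PATH:

// config_roundtrip_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// getCPUs returns the configured value, or the command's exit code when
// the key is missing (14 in the log above).
func getCPUs(profile string) (string, int) {
	out, err := exec.Command("minikube", "-p", profile, "config", "get", "cpus").Output()
	if ee, ok := err.(*exec.ExitError); ok {
		return "", ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	const p = "functional-900552"
	exec.Command("minikube", "-p", p, "config", "unset", "cpus").Run()
	if _, code := getCPUs(p); code != 14 {
		fmt.Println("expected exit status 14 for an unset key, got", code)
	}
	exec.Command("minikube", "-p", p, "config", "set", "cpus", "2").Run()
	val, _ := getCPUs(p)
	fmt.Println("cpus after set:", val)
}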

TestFunctional/parallel/DashboardCmd (9.53s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-900552 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-900552 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 320537: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.53s)

TestFunctional/parallel/DryRun (0.59s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-900552 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (232.477103ms)

-- stdout --
	* [functional-900552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1025 09:52:52.030086  319996 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:52:52.030273  319996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:52.030283  319996 out.go:374] Setting ErrFile to fd 2...
	I1025 09:52:52.030297  319996 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:52.030586  319996 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:52:52.030971  319996 out.go:368] Setting JSON to false
	I1025 09:52:52.031904  319996 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5722,"bootTime":1761380250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:52:52.031982  319996 start.go:141] virtualization:  
	I1025 09:52:52.035051  319996 out.go:179] * [functional-900552] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 09:52:52.038870  319996 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:52:52.039006  319996 notify.go:220] Checking for updates...
	I1025 09:52:52.044700  319996 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:52:52.047643  319996 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:52:52.050540  319996 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:52:52.053388  319996 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:52:52.056235  319996 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:52:52.059706  319996 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:52.060338  319996 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:52:52.092515  319996 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:52:52.092642  319996 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:52:52.187391  319996 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:52:52.158198066 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:52:52.187510  319996 docker.go:318] overlay module found
	I1025 09:52:52.190553  319996 out.go:179] * Using the docker driver based on existing profile
	I1025 09:52:52.193437  319996 start.go:305] selected driver: docker
	I1025 09:52:52.193461  319996 start.go:925] validating driver "docker" against &{Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:52.193566  319996 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:52:52.196851  319996 out.go:203] 
	W1025 09:52:52.199771  319996 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1025 09:52:52.203370  319996 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.59s)
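The dry run exits 23 because the requested 250MiB is below the 1800MB floor quoted in RSRC_INSUFFICIENT_REQ_MEMORY. A toy sketch of that validation; the constant and wording here mirror the logged message, not minikube's internal sources:

// memcheck_sketch.go
package main

import "fmt"

// minUsableMB is the floor quoted in the log above, assumed here for illustration.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // rejected, as in the log
	fmt.Println(validateMemory(4096)) // accepted
}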

TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-900552 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-900552 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (289.369651ms)

-- stdout --
	* [functional-900552] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1025 09:52:51.786505  319927 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:52:51.786749  319927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:51.786766  319927 out.go:374] Setting ErrFile to fd 2...
	I1025 09:52:51.786772  319927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:52:51.787674  319927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:52:51.788187  319927 out.go:368] Setting JSON to false
	I1025 09:52:51.789229  319927 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5722,"bootTime":1761380250,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 09:52:51.789303  319927 start.go:141] virtualization:  
	I1025 09:52:51.792979  319927 out.go:179] * [functional-900552] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1025 09:52:51.796113  319927 notify.go:220] Checking for updates...
	I1025 09:52:51.796724  319927 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 09:52:51.799890  319927 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 09:52:51.802796  319927 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 09:52:51.805792  319927 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 09:52:51.809667  319927 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 09:52:51.812755  319927 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 09:52:51.818960  319927 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:52:51.819687  319927 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 09:52:51.858145  319927 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 09:52:51.858311  319927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:52:51.956658  319927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 09:52:51.947694966 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:52:51.956769  319927 docker.go:318] overlay module found
	I1025 09:52:51.959932  319927 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1025 09:52:51.962782  319927 start.go:305] selected driver: docker
	I1025 09:52:51.962805  319927 start.go:925] validating driver "docker" against &{Name:functional-900552 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-900552 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1025 09:52:51.962899  319927 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 09:52:51.966424  319927 out.go:203] 
	W1025 09:52:51.969388  319927 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1025 09:52:51.972249  319927 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
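The French output above is the point of this test: the same dry run fails with the same exit status 23, but localized. A hedged sketch of driving it by hand, assuming minikube selects its translations from the locale environment (which the log suggests but does not show; the harness may set it differently):

// locale_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-900552",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// Assumption: a French locale in the environment selects the French catalog.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // expected to exit 23, as above
	fmt.Print(string(out))
}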

TestFunctional/parallel/StatusCmd (1.33s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.33s)
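The -f argument at functional_test.go:875 is a Go text/template rendered against the status object. A self-contained sketch of that rendering, assuming a struct with the four fields the template references; the "kublet:" label is copied verbatim from the test's format string:

// status_format_sketch.go
package main

import (
	"os"
	"text/template"
)

// status stands in for minikube's status struct; only the fields used by
// the template above are assumed here.
type status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	tmpl.Execute(os.Stdout, status{"Running", "Running", "Running", "Configured"})
}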

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f04b08cb-4093-4245-9f5f-4f696215a979] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005241688s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-900552 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-900552 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-900552 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-900552 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c90ddabc-a12a-4056-bf78-424f372d6aef] Pending
helpers_test.go:352: "sp-pod" [c90ddabc-a12a-4056-bf78-424f372d6aef] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c90ddabc-a12a-4056-bf78-424f372d6aef] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003658869s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-900552 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-900552 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-900552 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [194cac80-131f-408c-9bf3-2693486b2039] Pending
helpers_test.go:352: "sp-pod" [194cac80-131f-408c-9bf3-2693486b2039] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [194cac80-131f-408c-9bf3-2693486b2039] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002807644s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-900552 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.95s)
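The sequence above is a persistence round trip: write /tmp/mount/foo inside sp-pod, delete the pod, re-create it against the same claim, and list the file again. A sketch of the same steps with plain kubectl calls, assuming the same testdata manifests and context are available:

// pvc_persistence_sketch.go
package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a command against the functional-900552 context.
func kubectl(args ...string) error {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-900552"}, args...)...).Run()
}

func main() {
	kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Re-create the pod; the PVC (and its volume) outlives it.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")

	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo"); err != nil {
		fmt.Println("file did not survive pod re-creation:", err)
		return
	}
	fmt.Println("file survived pod re-creation")
}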

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.46s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh -n functional-900552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cp functional-900552:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3199937329/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh -n functional-900552 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh -n functional-900552 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.46s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/294017/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /etc/test/nested/copy/294017/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.22s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/294017.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /etc/ssl/certs/294017.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/294017.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /usr/share/ca-certificates/294017.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2940172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /etc/ssl/certs/2940172.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2940172.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /usr/share/ca-certificates/2940172.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.22s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-900552 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "sudo systemctl is-active docker": exit status 1 (382.514083ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "sudo systemctl is-active containerd": exit status 1 (352.259329ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
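Both probes "fail" by design: systemctl is-active prints the unit state and exits non-zero (status 3 here, propagated through ssh) for anything other than active, which proves docker and containerd are disabled while crio serves the cluster. A sketch that surfaces the state strings instead of exit codes, assuming the same profile:

// runtime_state_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeState returns what systemctl prints ("active", "inactive", ...);
// the command's non-zero exit for inactive units is deliberately ignored,
// since the printed state is captured either way.
func runtimeState(profile, unit string) string {
	out, _ := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, runtimeState("functional-900552", unit))
	}
}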

TestFunctional/parallel/License (0.34s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 316482: os: process already finished
helpers_test.go:519: unable to terminate pid 316298: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-900552 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [50017c49-e9f3-400f-8121-793b6e451cd9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [50017c49-e9f3-400f-8121-793b6e451cd9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003340127s
I1025 09:42:33.340908  294017 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-900552 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.123.27 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
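AccessDirect only works while the tunnel from StartTunnel is running, because the tunnel is what populates .status.loadBalancer.ingress on nginx-svc (10.108.123.27 above). A sketch pairing the IngressIP lookup with the HTTP probe, assuming an active `minikube tunnel` and the same context:

// tunnel_probe_sketch.go
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-900552",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	ip := strings.TrimSpace(string(out))
	if err != nil || ip == "" {
		fmt.Println("no ingress IP yet; is the tunnel running?")
		return
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "answered with", resp.Status)
}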

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-900552 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "406.459609ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.429197ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "371.927833ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.056405ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (7.84s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdany-port3038363532/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761385958613252049" to /tmp/TestFunctionalparallelMountCmdany-port3038363532/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761385958613252049" to /tmp/TestFunctionalparallelMountCmdany-port3038363532/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761385958613252049" to /tmp/TestFunctionalparallelMountCmdany-port3038363532/001/test-1761385958613252049
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.817728ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:52:38.970330  294017 retry.go:31] will retry after 339.24945ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 25 09:52 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 25 09:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 25 09:52 test-1761385958613252049
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh cat /mount-9p/test-1761385958613252049
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-900552 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d3ef5116-0414-4194-9f7f-3079d8d0557b] Pending
helpers_test.go:352: "busybox-mount" [d3ef5116-0414-4194-9f7f-3079d8d0557b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d3ef5116-0414-4194-9f7f-3079d8d0557b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d3ef5116-0414-4194-9f7f-3079d8d0557b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003312691s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-900552 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdany-port3038363532/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.84s)
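Note the retry at 09:52:38.970330: the 9p mount becomes visible inside the node only shortly after the mount daemon starts, so the findmnt probe is polled rather than run once. A sketch of that polling loop, assuming a mount daemon was started separately with `minikube mount ... :/mount-9p`:

// mount_probe_sketch.go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// grep makes the probe fail unless a 9p filesystem backs /mount-9p.
		err := exec.Command("minikube", "-p", "functional-900552", "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible inside the node")
			return
		}
		fmt.Printf("attempt %d: not mounted yet, retrying\n", attempt)
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}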

TestFunctional/parallel/MountCmd/specific-port (1.95s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdspecific-port1160066702/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.632274ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1025 09:52:46.791643  294017 retry.go:31] will retry after 579.544175ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdspecific-port1160066702/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "sudo umount -f /mount-9p": exit status 1 (270.6833ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-900552 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdspecific-port1160066702/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.95s)
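The `retry.go:31] will retry after 579.544175ms` line above is the harness waiting out the race between the mount daemon starting and findmnt seeing the 9p mount. A sketch of that retry-with-randomized-backoff shape, not minikube's actual retry package:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn with a randomized, doubling delay,
    // logging each wait the way the retry.go lines above do.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    		base *= 2
    	}
    	return err
    }

    func main() {
    	_ = retryWithBackoff(3, 500*time.Millisecond, func() error {
    		// Stand-in for `ssh "findmnt -T /mount-9p | grep 9p"` before the mount appears.
    		return fmt.Errorf("exit status 1")
    	})
    }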

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T" /mount1: exit status 1 (563.802698ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1025 09:52:48.972817  294017 retry.go:31] will retry after 468.282781ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-900552 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-900552 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2703707363/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.95s)
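VerifyCleanup starts three daemons and then relies on `minikube mount -p <profile> --kill=true` to tear them all down at once; the "unable to find parent, assuming dead" lines are the stop helper finding each daemon already gone. A sketch of a post-kill check, assuming a clean state means findmnt now fails for each former mount point:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Kill every mount daemon for the profile, as functional_test_mount_test.go:370 does.
    	if out, err := exec.Command("minikube", "mount", "-p", "functional-900552", "--kill=true").CombinedOutput(); err != nil {
    		fmt.Printf("kill failed: %v\n%s", err, out)
    		return
    	}
    	// With the daemons gone, findmnt should fail for each former mount point.
    	for _, m := range []string{"/mount1", "/mount2", "/mount3"} {
    		if exec.Command("minikube", "-p", "functional-900552", "ssh", "findmnt", "-T", m).Run() == nil {
    			fmt.Printf("%s is still mounted\n", m)
    		}
    	}
    }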

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 service list -o json
functional_test.go:1504: Took "643.646399ms" to run "out/minikube-linux-arm64 -p functional-900552 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)
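The JSONOutput subtest only asserts that `service list -o json` round-trips; decoding it needs one small struct. The field names below are an assumption inferred from the human-readable table columns (Namespace, Name, URLs), not a documented schema:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // serviceRow models one entry of `minikube service list -o json`.
    // Field names are assumed; verify against your minikube version.
    type serviceRow struct {
    	Namespace string   `json:"Namespace"`
    	Name      string   `json:"Name"`
    	URLs      []string `json:"URLs"`
    }

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-900552", "service", "list", "-o", "json").Output()
    	if err != nil {
    		fmt.Println("service list failed:", err)
    		return
    	}
    	var rows []serviceRow
    	if err := json.Unmarshal(out, &rows); err != nil {
    		fmt.Println("unexpected shape:", err)
    		return
    	}
    	for _, r := range rows {
    		fmt.Printf("%s/%s -> %v\n", r.Namespace, r.Name, r.URLs)
    	}
    }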

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 version -o=json --components: (1.381156999s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-900552 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-900552 image ls --format short --alsologtostderr:
I1025 09:53:06.195607  322509 out.go:360] Setting OutFile to fd 1 ...
I1025 09:53:06.195822  322509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.195852  322509 out.go:374] Setting ErrFile to fd 2...
I1025 09:53:06.195872  322509 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.196145  322509 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
I1025 09:53:06.196795  322509 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.196965  322509 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.197455  322509 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
I1025 09:53:06.215674  322509 ssh_runner.go:195] Run: systemctl --version
I1025 09:53:06.215724  322509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
I1025 09:53:06.240509  322509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
I1025 09:53:06.350219  322509 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
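The stderr trace shows that every `image ls` format is backed by the same call, `sudo crictl images --output json` run over SSH; only the rendering differs. A sketch of decoding that output, assuming crictl's usual {"images":[...]} envelope with size as a decimal string of bytes:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages models `crictl images --output json`; the shape is
    // assumed from recent crictl releases.
    type crictlImages struct {
    	Images []struct {
    		ID          string   `json:"id"`
    		RepoTags    []string `json:"repoTags"`
    		RepoDigests []string `json:"repoDigests"`
    		Size        string   `json:"size"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-900552", "ssh", "sudo crictl images --output json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	for _, img := range imgs.Images {
    		fmt.Println(img.ID, img.RepoTags, img.Size)
    	}
    }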

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-900552 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-900552 image ls --format table --alsologtostderr:
I1025 09:53:06.957284  322740 out.go:360] Setting OutFile to fd 1 ...
I1025 09:53:06.957499  322740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.957525  322740 out.go:374] Setting ErrFile to fd 2...
I1025 09:53:06.957544  322740 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.957851  322740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
I1025 09:53:06.958555  322740 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.958739  322740 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.959293  322740 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
I1025 09:53:06.977764  322740 ssh_runner.go:195] Run: systemctl --version
I1025 09:53:06.977928  322740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
I1025 09:53:07.000903  322740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
I1025 09:53:07.111685  322740 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-900552 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},
{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},
{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},
{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},
{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},
{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},
{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},
{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},
{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},
{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},
{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},
{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},
{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},
{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},
{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},
{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},
{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-900552 image ls --format json --alsologtostderr:
I1025 09:53:06.688393  322676 out.go:360] Setting OutFile to fd 1 ...
I1025 09:53:06.688594  322676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.688620  322676 out.go:374] Setting ErrFile to fd 2...
I1025 09:53:06.688641  322676 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.688968  322676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
I1025 09:53:06.689604  322676 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.689873  322676 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.690376  322676 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
I1025 09:53:06.708672  322676 ssh_runner.go:195] Run: systemctl --version
I1025 09:53:06.708734  322676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
I1025 09:53:06.729220  322676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
I1025 09:53:06.854005  322676 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-900552 image ls --format yaml --alsologtostderr:
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-900552 image ls --format yaml --alsologtostderr:
I1025 09:53:06.400872  322569 out.go:360] Setting OutFile to fd 1 ...
I1025 09:53:06.401029  322569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.401036  322569 out.go:374] Setting ErrFile to fd 2...
I1025 09:53:06.401041  322569 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.401692  322569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
I1025 09:53:06.402340  322569 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.402470  322569 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.402934  322569 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
I1025 09:53:06.433636  322569 ssh_runner.go:195] Run: systemctl --version
I1025 09:53:06.433687  322569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
I1025 09:53:06.461387  322569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
I1025 09:53:06.571048  322569 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-900552 ssh pgrep buildkitd: exit status 1 (340.046568ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image build -t localhost/my-image:functional-900552 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-900552 image build -t localhost/my-image:functional-900552 testdata/build --alsologtostderr: (3.444669767s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-900552 image build -t localhost/my-image:functional-900552 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ee31eedf829
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-900552
--> e7f6804c1b3
Successfully tagged localhost/my-image:functional-900552
e7f6804c1b3b981380b94fcadb9248b264eeb399af29ffefa38c8e7e6a23497e
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-900552 image build -t localhost/my-image:functional-900552 testdata/build --alsologtostderr:
I1025 09:53:06.817096  322703 out.go:360] Setting OutFile to fd 1 ...
I1025 09:53:06.817766  322703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.817804  322703 out.go:374] Setting ErrFile to fd 2...
I1025 09:53:06.817895  322703 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1025 09:53:06.818264  322703 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
I1025 09:53:06.819141  322703 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.819931  322703 config.go:182] Loaded profile config "functional-900552": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1025 09:53:06.820466  322703 cli_runner.go:164] Run: docker container inspect functional-900552 --format={{.State.Status}}
I1025 09:53:06.846073  322703 ssh_runner.go:195] Run: systemctl --version
I1025 09:53:06.846148  322703 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-900552
I1025 09:53:06.870250  322703 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33152 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/functional-900552/id_rsa Username:docker}
I1025 09:53:06.986259  322703 build_images.go:161] Building image from path: /tmp/build.3544530598.tar
I1025 09:53:06.986330  322703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1025 09:53:06.994695  322703 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3544530598.tar
I1025 09:53:07.000136  322703 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3544530598.tar: stat -c "%s %y" /var/lib/minikube/build/build.3544530598.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3544530598.tar': No such file or directory
I1025 09:53:07.000174  322703 ssh_runner.go:362] scp /tmp/build.3544530598.tar --> /var/lib/minikube/build/build.3544530598.tar (3072 bytes)
I1025 09:53:07.022326  322703 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3544530598
I1025 09:53:07.031330  322703 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3544530598 -xf /var/lib/minikube/build/build.3544530598.tar
I1025 09:53:07.040756  322703 crio.go:315] Building image: /var/lib/minikube/build/build.3544530598
I1025 09:53:07.040846  322703 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-900552 /var/lib/minikube/build/build.3544530598 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1025 09:53:10.155670  322703 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-900552 /var/lib/minikube/build/build.3544530598 --cgroup-manager=cgroupfs: (3.114798445s)
I1025 09:53:10.155747  322703 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3544530598
I1025 09:53:10.163847  322703 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3544530598.tar
I1025 09:53:10.172262  322703 build_images.go:217] Built localhost/my-image:functional-900552 from /tmp/build.3544530598.tar
I1025 09:53:10.172295  322703 build_images.go:133] succeeded building to: functional-900552
I1025 09:53:10.172301  322703 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.02s)
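With the crio runtime, `image build` stages the context tar under /var/lib/minikube/build, unpacks it, and delegates to `sudo podman build` on the node, exactly the sequence the ssh_runner lines above record. A condensed sketch of those steps over `minikube ssh`; the ctx path is a placeholder, and in the real flow the tar is copied up first:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes one staged-build step inside the node over `minikube ssh`.
    func run(cmd string) error {
    	out, err := exec.Command("minikube", "-p", "functional-900552", "ssh", cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q: %v\n%s", cmd, err, out)
    	}
    	return nil
    }

    func main() {
    	steps := []string{
    		"sudo mkdir -p /var/lib/minikube/build/ctx",
    		// The real flow scp's the context tar up first; assume it already sits at this path.
    		"sudo tar -C /var/lib/minikube/build/ctx -xf /var/lib/minikube/build/ctx.tar",
    		"sudo podman build -t localhost/my-image:functional-900552 /var/lib/minikube/build/ctx --cgroup-manager=cgroupfs",
    		"sudo rm -rf /var/lib/minikube/build/ctx /var/lib/minikube/build/ctx.tar",
    	}
    	for _, s := range steps {
    		if err := run(s); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    }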

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-900552
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.71s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
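All three update-context variants boil down to rewriting the profile's kubeconfig entry so the server address matches the current container endpoint. A sketch of the same idea with client-go's clientcmd; the kubeconfig path, the assumption that the cluster entry is keyed by profile name, and newServer are all placeholders:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.LoadFromFile(kubeconfig)
    	if err != nil {
    		fmt.Println("load failed:", err)
    		return
    	}
    	const profile = "functional-900552"
    	const newServer = "https://127.0.0.1:33152" // placeholder: the node's current apiserver endpoint
    	if cluster, ok := cfg.Clusters[profile]; ok && cluster.Server != newServer {
    		cluster.Server = newServer
    		if err := clientcmd.WriteToFile(*cfg, kubeconfig); err != nil {
    			fmt.Println("write failed:", err)
    			return
    		}
    		fmt.Println("kubeconfig updated")
    	}
    }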

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image rm kicbase/echo-server:functional-900552 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-900552 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-900552
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-900552
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-900552
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (205.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 09:55:28.681293  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m24.296314124s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (205.29s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 kubectl -- rollout status deployment/busybox: (5.417483717s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-djj4z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-jlvtw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-tp4kf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-djj4z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-jlvtw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-tp4kf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-djj4z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-jlvtw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-tp4kf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.44s)
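DeployApp rolls out the busybox deployment and then checks DNS from every replica against kubernetes.io, kubernetes.default, and the full cluster FQDN. A sketch of that per-pod loop, shelling out to kubectl as the harness does; the app=busybox selector is assumed, since the testdata manifest's labels are not shown:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// The `app=busybox` label selector is an assumption about the testdata manifest.
    	out, err := exec.Command("kubectl", "--context", "ha-992243", "get", "pods",
    		"-l", "app=busybox", "-o", "jsonpath={.items[*].metadata.name}").Output()
    	if err != nil {
    		fmt.Println("get pods failed:", err)
    		return
    	}
    	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    	for _, pod := range strings.Fields(string(out)) {
    		for _, host := range targets {
    			if err := exec.Command("kubectl", "--context", "ha-992243",
    				"exec", pod, "--", "nslookup", host).Run(); err != nil {
    				fmt.Printf("%s: nslookup %s failed: %v\n", pod, host, err)
    			}
    		}
    	}
    }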

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-djj4z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-djj4z -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-jlvtw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-jlvtw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-tp4kf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 kubectl -- exec busybox-7b57f96db7-tp4kf -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.43s)
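The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline plucks the host IP out of busybox-style nslookup output: line 5, third space-separated field. The same extraction in Go; the sample output shape is assumed, since busybox nslookup formatting varies by version:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`:
    // take line 5 of the output, then its third space-separated field.
    func hostIP(nslookupOut string) (string, error) {
    	lines := strings.Split(nslookupOut, "\n")
    	if len(lines) < 5 {
    		return "", fmt.Errorf("short nslookup output")
    	}
    	fields := strings.Split(lines[4], " ")
    	if len(fields) < 3 {
    		return "", fmt.Errorf("unexpected line: %q", lines[4])
    	}
    	return fields[2], nil
    }

    func main() {
    	// Shaped like busybox nslookup output; the exact format is assumed.
    	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.49.1\n"
    	ip, err := hostIP(sample)
    	fmt.Println(ip, err)
    }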

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node add --alsologtostderr -v 5
E1025 09:56:51.745165  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:23.943062  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:23.949988  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:23.961462  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:23.983081  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:24.024438  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:24.105878  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:24.267445  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:24.588922  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:25.230519  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:26.512098  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:29.073461  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:34.195596  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 09:57:44.437744  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 node add --alsologtostderr -v 5: (58.934037733s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5: (1.060738702s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.99s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-992243 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084935873s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 status --output json --alsologtostderr -v 5: (1.032706975s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp testdata/cp-test.txt ha-992243:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3198066688/001/cp-test_ha-992243.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243:/home/docker/cp-test.txt ha-992243-m02:/home/docker/cp-test_ha-992243_ha-992243-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test_ha-992243_ha-992243-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243:/home/docker/cp-test.txt ha-992243-m03:/home/docker/cp-test_ha-992243_ha-992243-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test_ha-992243_ha-992243-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243:/home/docker/cp-test.txt ha-992243-m04:/home/docker/cp-test_ha-992243_ha-992243-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test_ha-992243_ha-992243-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp testdata/cp-test.txt ha-992243-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3198066688/001/cp-test_ha-992243-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m02:/home/docker/cp-test.txt ha-992243:/home/docker/cp-test_ha-992243-m02_ha-992243.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test_ha-992243-m02_ha-992243.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m02:/home/docker/cp-test.txt ha-992243-m03:/home/docker/cp-test_ha-992243-m02_ha-992243-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test_ha-992243-m02_ha-992243-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m02:/home/docker/cp-test.txt ha-992243-m04:/home/docker/cp-test_ha-992243-m02_ha-992243-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test_ha-992243-m02_ha-992243-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp testdata/cp-test.txt ha-992243-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3198066688/001/cp-test_ha-992243-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m03:/home/docker/cp-test.txt ha-992243:/home/docker/cp-test_ha-992243-m03_ha-992243.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test_ha-992243-m03_ha-992243.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m03:/home/docker/cp-test.txt ha-992243-m02:/home/docker/cp-test_ha-992243-m03_ha-992243-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test_ha-992243-m03_ha-992243-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m03:/home/docker/cp-test.txt ha-992243-m04:/home/docker/cp-test_ha-992243-m03_ha-992243-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test_ha-992243-m03_ha-992243-m04.txt"
E1025 09:58:04.919464  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp testdata/cp-test.txt ha-992243-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3198066688/001/cp-test_ha-992243-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m04:/home/docker/cp-test.txt ha-992243:/home/docker/cp-test_ha-992243-m04_ha-992243.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243 "sudo cat /home/docker/cp-test_ha-992243-m04_ha-992243.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m04:/home/docker/cp-test.txt ha-992243-m02:/home/docker/cp-test_ha-992243-m04_ha-992243-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m02 "sudo cat /home/docker/cp-test_ha-992243-m04_ha-992243-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 cp ha-992243-m04:/home/docker/cp-test.txt ha-992243-m03:/home/docker/cp-test_ha-992243-m04_ha-992243-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 ssh -n ha-992243-m03 "sudo cat /home/docker/cp-test_ha-992243-m04_ha-992243-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.04s)

TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 node stop m02 --alsologtostderr -v 5: (12.089715401s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5: exit status 7 (815.374936ms)

-- stdout --
	ha-992243
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-992243-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992243-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-992243-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1025 09:58:21.773001  337576 out.go:360] Setting OutFile to fd 1 ...
	I1025 09:58:21.773173  337576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:58:21.773202  337576 out.go:374] Setting ErrFile to fd 2...
	I1025 09:58:21.773226  337576 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 09:58:21.773488  337576 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 09:58:21.773709  337576 out.go:368] Setting JSON to false
	I1025 09:58:21.773772  337576 mustload.go:65] Loading cluster: ha-992243
	I1025 09:58:21.774266  337576 config.go:182] Loaded profile config "ha-992243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 09:58:21.774317  337576 status.go:174] checking status of ha-992243 ...
	I1025 09:58:21.774884  337576 cli_runner.go:164] Run: docker container inspect ha-992243 --format={{.State.Status}}
	I1025 09:58:21.773813  337576 notify.go:220] Checking for updates...
	I1025 09:58:21.803582  337576 status.go:371] ha-992243 host status = "Running" (err=<nil>)
	I1025 09:58:21.803606  337576 host.go:66] Checking if "ha-992243" exists ...
	I1025 09:58:21.804103  337576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992243
	I1025 09:58:21.834811  337576 host.go:66] Checking if "ha-992243" exists ...
	I1025 09:58:21.835119  337576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:58:21.835236  337576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992243
	I1025 09:58:21.855343  337576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/ha-992243/id_rsa Username:docker}
	I1025 09:58:21.965021  337576 ssh_runner.go:195] Run: systemctl --version
	I1025 09:58:21.973273  337576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:58:21.988672  337576 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 09:58:22.060540  337576 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-25 09:58:22.049314172 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 09:58:22.061170  337576 kubeconfig.go:125] found "ha-992243" server: "https://192.168.49.254:8443"
	I1025 09:58:22.061211  337576 api_server.go:166] Checking apiserver status ...
	I1025 09:58:22.061258  337576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:58:22.074291  337576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	I1025 09:58:22.083490  337576 api_server.go:182] apiserver freezer: "6:freezer:/docker/57772e695cbd92661ed168fea55909c9f0b05c9c5ebeaafd62733a4383914640/crio/crio-b6a04e8c52f77119b7a9357a57784cd6478be1c94ab10147d420433d9a69264a"
	I1025 09:58:22.083577  337576 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/57772e695cbd92661ed168fea55909c9f0b05c9c5ebeaafd62733a4383914640/crio/crio-b6a04e8c52f77119b7a9357a57784cd6478be1c94ab10147d420433d9a69264a/freezer.state
	I1025 09:58:22.093260  337576 api_server.go:204] freezer state: "THAWED"
	I1025 09:58:22.093291  337576 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:58:22.111871  337576 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:58:22.111902  337576 status.go:463] ha-992243 apiserver status = Running (err=<nil>)
	I1025 09:58:22.111915  337576 status.go:176] ha-992243 status: &{Name:ha-992243 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:58:22.111932  337576 status.go:174] checking status of ha-992243-m02 ...
	I1025 09:58:22.112322  337576 cli_runner.go:164] Run: docker container inspect ha-992243-m02 --format={{.State.Status}}
	I1025 09:58:22.131467  337576 status.go:371] ha-992243-m02 host status = "Stopped" (err=<nil>)
	I1025 09:58:22.131499  337576 status.go:384] host is not running, skipping remaining checks
	I1025 09:58:22.131506  337576 status.go:176] ha-992243-m02 status: &{Name:ha-992243-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:58:22.131526  337576 status.go:174] checking status of ha-992243-m03 ...
	I1025 09:58:22.131857  337576 cli_runner.go:164] Run: docker container inspect ha-992243-m03 --format={{.State.Status}}
	I1025 09:58:22.154665  337576 status.go:371] ha-992243-m03 host status = "Running" (err=<nil>)
	I1025 09:58:22.154688  337576 host.go:66] Checking if "ha-992243-m03" exists ...
	I1025 09:58:22.155000  337576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992243-m03
	I1025 09:58:22.175532  337576 host.go:66] Checking if "ha-992243-m03" exists ...
	I1025 09:58:22.175898  337576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:58:22.175941  337576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992243-m03
	I1025 09:58:22.196663  337576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/ha-992243-m03/id_rsa Username:docker}
	I1025 09:58:22.301019  337576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:58:22.314486  337576 kubeconfig.go:125] found "ha-992243" server: "https://192.168.49.254:8443"
	I1025 09:58:22.314515  337576 api_server.go:166] Checking apiserver status ...
	I1025 09:58:22.314558  337576 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 09:58:22.327100  337576 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1181/cgroup
	I1025 09:58:22.335865  337576 api_server.go:182] apiserver freezer: "6:freezer:/docker/02224d953869697d7c884dc8fdd0b21fbe4f9d65c135d656ae3d72a5675ef2fc/crio/crio-6b0052a5061542626b412e03822253f8e51f72735a5612f61fc51eeefe0170c1"
	I1025 09:58:22.335932  337576 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/02224d953869697d7c884dc8fdd0b21fbe4f9d65c135d656ae3d72a5675ef2fc/crio/crio-6b0052a5061542626b412e03822253f8e51f72735a5612f61fc51eeefe0170c1/freezer.state
	I1025 09:58:22.352375  337576 api_server.go:204] freezer state: "THAWED"
	I1025 09:58:22.352407  337576 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1025 09:58:22.360568  337576 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1025 09:58:22.360600  337576 status.go:463] ha-992243-m03 apiserver status = Running (err=<nil>)
	I1025 09:58:22.360611  337576 status.go:176] ha-992243-m03 status: &{Name:ha-992243-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 09:58:22.360648  337576 status.go:174] checking status of ha-992243-m04 ...
	I1025 09:58:22.360991  337576 cli_runner.go:164] Run: docker container inspect ha-992243-m04 --format={{.State.Status}}
	I1025 09:58:22.378126  337576 status.go:371] ha-992243-m04 host status = "Running" (err=<nil>)
	I1025 09:58:22.378148  337576 host.go:66] Checking if "ha-992243-m04" exists ...
	I1025 09:58:22.378556  337576 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-992243-m04
	I1025 09:58:22.397495  337576 host.go:66] Checking if "ha-992243-m04" exists ...
	I1025 09:58:22.397860  337576 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 09:58:22.397910  337576 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-992243-m04
	I1025 09:58:22.415477  337576 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33172 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/ha-992243-m04/id_rsa Username:docker}
	I1025 09:58:22.520339  337576 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 09:58:22.533942  337576 status.go:176] ha-992243-m04 status: &{Name:ha-992243-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
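
Note on the status probe traced in the stderr block above: for each control-plane node, the status command inspects the Docker container state, then (over SSH) locates the kube-apiserver process with pgrep, confirms that process's freezer cgroup is THAWED (i.e. the node is not paused), and finally GETs /healthz on the load-balanced endpoint 192.168.49.254:8443. The sketch below reproduces that probe sequence as a standalone Go program, assuming local shell access in place of minikube's ssh_runner; it is illustrative, not minikube's actual code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Step 1: find the newest kube-apiserver process, as in the trace's
	// "pgrep -xnf kube-apiserver.*minikube.*" line.
	pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	// Step 2: read the freezer state of that PID's cgroup (cgroup v1 layout,
	// matching the /sys/fs/cgroup/freezer/... path in the trace).
	script := fmt.Sprintf(
		"cat /sys/fs/cgroup/freezer$(awk -F: '/freezer/{print $3}' /proc/%s/cgroup)/freezer.state",
		strings.TrimSpace(string(pid)))
	state, err := exec.Command("sh", "-c", script).Output()
	if err != nil || strings.TrimSpace(string(state)) != "THAWED" {
		fmt.Println("apiserver cgroup not THAWED; node is paused or unreadable")
		return
	}
	// Step 3: hit the HA endpoint's healthz; the test expects "200 ok".
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}

The THAWED check is what separates a paused node from a stopped one: for m02 above, the container inspection already reports "Stopped", so the cgroup and healthz steps are skipped entirely.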

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

TestMultiControlPlane/serial/RestartSecondaryNode (32.2s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node start m02 --alsologtostderr -v 5
E1025 09:58:45.880732  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 node start m02 --alsologtostderr -v 5: (30.660178731s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5: (1.411854563s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.2s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.204003645s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.20s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.33s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 stop --alsologtostderr -v 5: (37.698213153s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 start --wait true --alsologtostderr -v 5
E1025 10:00:07.803295  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:00:28.677097  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 start --wait true --alsologtostderr -v 5: (1m32.460418579s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.33s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.63s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 node delete m03 --alsologtostderr -v 5: (10.673400255s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.63s)
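
The last verification step above renders each node's Ready condition through a go-template. Below is a self-contained sketch of how that template evaluates, using hypothetical node data and exported struct fields in place of the lowercase JSON keys (.items, .status.conditions) that kubectl feeds the real template:

package main

import (
	"os"
	"text/template"
)

// Stand-ins for the only fields the template touches; kubectl applies the
// real template to the full NodeList JSON.
type condition struct{ Type, Status string }
type nodeStatus struct{ Conditions []condition }
type node struct{ Status nodeStatus }
type nodeList struct{ Items []node }

// Same template as the test, with field names capitalized to match the
// structs above.
const readyTmpl = `{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	list := nodeList{Items: []node{
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
		{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
	}}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
	// Prints one " True" line per node whose Ready condition is True.
}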

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (36.18s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 stop --alsologtostderr -v 5: (36.063721162s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5: exit status 7 (114.728139ms)

-- stdout --
	ha-992243
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992243-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-992243-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 10:01:55.641496  349615 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:01:55.641693  349615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:01:55.641720  349615 out.go:374] Setting ErrFile to fd 2...
	I1025 10:01:55.641740  349615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:01:55.642050  349615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:01:55.642285  349615 out.go:368] Setting JSON to false
	I1025 10:01:55.642347  349615 mustload.go:65] Loading cluster: ha-992243
	I1025 10:01:55.642422  349615 notify.go:220] Checking for updates...
	I1025 10:01:55.643628  349615 config.go:182] Loaded profile config "ha-992243": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:01:55.643658  349615 status.go:174] checking status of ha-992243 ...
	I1025 10:01:55.644298  349615 cli_runner.go:164] Run: docker container inspect ha-992243 --format={{.State.Status}}
	I1025 10:01:55.663528  349615 status.go:371] ha-992243 host status = "Stopped" (err=<nil>)
	I1025 10:01:55.663549  349615 status.go:384] host is not running, skipping remaining checks
	I1025 10:01:55.663556  349615 status.go:176] ha-992243 status: &{Name:ha-992243 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:01:55.663586  349615 status.go:174] checking status of ha-992243-m02 ...
	I1025 10:01:55.663884  349615 cli_runner.go:164] Run: docker container inspect ha-992243-m02 --format={{.State.Status}}
	I1025 10:01:55.688697  349615 status.go:371] ha-992243-m02 host status = "Stopped" (err=<nil>)
	I1025 10:01:55.688718  349615 status.go:384] host is not running, skipping remaining checks
	I1025 10:01:55.688735  349615 status.go:176] ha-992243-m02 status: &{Name:ha-992243-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:01:55.688755  349615 status.go:174] checking status of ha-992243-m04 ...
	I1025 10:01:55.689058  349615 cli_runner.go:164] Run: docker container inspect ha-992243-m04 --format={{.State.Status}}
	I1025 10:01:55.706555  349615 status.go:371] ha-992243-m04 host status = "Stopped" (err=<nil>)
	I1025 10:01:55.706577  349615 status.go:384] host is not running, skipping remaining checks
	I1025 10:01:55.706584  349615 status.go:176] ha-992243-m04 status: &{Name:ha-992243-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)

TestMultiControlPlane/serial/RestartCluster (68.94s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1025 10:02:23.944336  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:02:51.645102  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m7.962654635s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.94s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

TestMultiControlPlane/serial/AddSecondaryNode (49.82s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 node add --control-plane --alsologtostderr -v 5: (48.715503392s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-992243 status --alsologtostderr -v 5: (1.099701894s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (49.82s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.073700941s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

TestJSONOutput/start/Command (81.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-593183 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-593183 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m21.438974955s)
--- PASS: TestJSONOutput/start/Command (81.44s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-593183 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-593183 --output=json --user=testUser: (5.843924478s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-903358 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-903358 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.564744ms)

-- stdout --
	{"specversion":"1.0","id":"5f4b92d0-c960-43d1-8d81-e972e1607ce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-903358] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f7c80aa-3e62-4ad4-a534-556b8aa8d757","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21794"}}
	{"specversion":"1.0","id":"eba149c1-a365-41e8-a721-4a825b494e77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82757245-c10b-40b6-a038-736b3df2e075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig"}}
	{"specversion":"1.0","id":"dafbd421-9e3b-436a-81fe-060b2702aad6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube"}}
	{"specversion":"1.0","id":"eb3f4bb6-dd9e-410b-a6b2-64b0d362c2ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f197a5f5-8578-4a31-a7ff-cdba04723418","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3df3bf01-ec92-446f-baff-922c6814e0aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-903358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-903358
--- PASS: TestErrorJSONOutput (0.24s)
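
Every line in the stdout block above is a CloudEvents envelope: a specversion, a type such as io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info, or io.k8s.sigs.minikube.error, and a string-valued data map. A minimal consumer sketch, assuming only the fields visible in this report:

package main

import (
	"encoding/json"
	"fmt"
)

// Shape of one --output=json line, reduced to the fields used below.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Same shape as the DRV_UNSUPPORTED_OS line above (abridged).
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	switch ev.Type {
	case "io.k8s.sigs.minikube.step":
		fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
	case "io.k8s.sigs.minikube.error":
		fmt.Printf("error %s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	default:
		fmt.Println(ev.Data["message"])
	}
}

The exit code carried in the error event ("56") matches the process exit status the test asserts on, so machine consumers of --output=json can recover the failure class without parsing the human-readable message.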

TestKicCustomNetwork/create_custom_network (42.36s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-375496 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-375496 --network=: (40.157240177s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-375496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-375496
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-375496: (2.177842164s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.36s)

TestKicCustomNetwork/use_default_bridge_network (40.67s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-877933 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-877933 --network=bridge: (38.53007384s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-877933" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-877933
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-877933: (2.114142795s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (40.67s)

TestKicExistingNetwork (35.66s)

=== RUN   TestKicExistingNetwork
I1025 10:07:04.207879  294017 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1025 10:07:04.223529  294017 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1025 10:07:04.225012  294017 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1025 10:07:04.225105  294017 cli_runner.go:164] Run: docker network inspect existing-network
W1025 10:07:04.240408  294017 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1025 10:07:04.240440  294017 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1025 10:07:04.240457  294017 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1025 10:07:04.240554  294017 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1025 10:07:04.258086  294017 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-101b69e1e09b IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:d2:49:dd:18:ab:21} reservation:<nil>}
I1025 10:07:04.261477  294017 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1025 10:07:04.261841  294017 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019475c0}
I1025 10:07:04.262382  294017 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1025 10:07:04.262485  294017 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1025 10:07:04.327681  294017 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-705923 --network=existing-network
E1025 10:07:23.943300  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-705923 --network=existing-network: (33.442024517s)
helpers_test.go:175: Cleaning up "existing-network-705923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-705923
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-705923: (2.067927864s)
I1025 10:07:39.853970  294017 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.66s)
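
The network_create trace above shows how the subnet for the new bridge network is picked: starting at 192.168.49.0/24 it walks candidate /24s, skipping 192.168.49.0/24 as taken (an existing bridge owns it) and 192.168.58.0/24 as reserved, then settles on 192.168.67.0/24. A simplified sketch of that walk, assuming a step of 9 between candidates as the 49 -> 58 -> 67 progression suggests, with the taken/reserved predicates stubbed out:

package main

import "fmt"

func main() {
	// Stub predicates: the real checks inspect host interfaces (here the
	// br-... bridge holding 192.168.49.0/24) and a reserved-subnet list.
	taken := map[int]bool{49: true}
	reserved := map[int]bool{58: true}

	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		switch {
		case taken[octet]:
			fmt.Println("skipping subnet", cidr, "that is taken")
		case reserved[octet]:
			fmt.Println("skipping subnet", cidr, "that is reserved")
		default:
			fmt.Printf("using free private subnet %s (gateway 192.168.%d.1)\n", cidr, octet)
			return
		}
	}
	fmt.Println("no free subnet found")
}

The winning subnet and gateway then feed the docker network create --driver=bridge --subnet=... --gateway=... invocation recorded a few lines later in the trace.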

TestKicCustomSubnet (36.33s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-761599 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-761599 --subnet=192.168.60.0/24: (34.156755315s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-761599 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-761599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-761599
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-761599: (2.140292719s)
--- PASS: TestKicCustomSubnet (36.33s)

TestKicStaticIP (36.59s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-709342 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-709342 --static-ip=192.168.200.200: (34.231899522s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-709342 ip
helpers_test.go:175: Cleaning up "static-ip-709342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-709342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-709342: (2.206076413s)
--- PASS: TestKicStaticIP (36.59s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (72.87s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-492393 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-492393 --driver=docker  --container-runtime=crio: (34.352686551s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-494846 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-494846 --driver=docker  --container-runtime=crio: (32.051899253s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-492393
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-494846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-494846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-494846
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-494846: (2.151414743s)
helpers_test.go:175: Cleaning up "first-492393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-492393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-492393: (2.406717199s)
--- PASS: TestMinikubeProfile (72.87s)

TestMountStart/serial/StartWithMountFirst (6.41s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-286792 --memory=3072 --mount-string /tmp/TestMountStartserial4028959885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-286792 --memory=3072 --mount-string /tmp/TestMountStartserial4028959885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.411634331s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.41s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-286792 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.38s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-288753 --memory=3072 --mount-string /tmp/TestMountStartserial4028959885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-288753 --memory=3072 --mount-string /tmp/TestMountStartserial4028959885/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.376954159s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.38s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-288753 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-286792 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-286792 --alsologtostderr -v=5: (1.703888945s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-288753 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-288753
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-288753: (1.289292012s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-288753
E1025 10:10:28.676879  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-288753: (7.298856325s)
--- PASS: TestMountStart/serial/RestartStopped (8.30s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-288753 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (139.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-919215 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1025 10:12:23.943760  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-919215 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m18.914676313s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (139.45s)

TestMultiNode/serial/DeployApp2Nodes (5.48s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-919215 -- rollout status deployment/busybox: (3.393844487s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-bx8qj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-zpprx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-bx8qj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-zpprx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-bx8qj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-zpprx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.48s)
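For reference, the DNS probe this test performs reduces to two jsonpath queries plus an nslookup inside each pod. A minimal standalone sketch against the same profile (names taken from the run above; the loop is an illustrative condensation, not the test's actual code):

    # Pod IPs and names via the same jsonpath expressions used above.
    kubectl --context multinode-919215 get pods -o 'jsonpath={.items[*].status.podIP}'
    # Resolve cluster DNS from inside every busybox pod; a non-zero exit fails the check.
    for pod in $(kubectl --context multinode-919215 get pods -o 'jsonpath={.items[*].metadata.name}'); do
      kubectl --context multinode-919215 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done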

TestMultiNode/serial/PingHostFrom2Pods (0.94s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-bx8qj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-bx8qj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-zpprx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-919215 -- exec busybox-7b57f96db7-zpprx -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
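The awk/cut pipeline above depends on busybox's nslookup output layout: the resolved address sits on line 5 as the third space-separated field. A sketch of the extraction and ping, run by hand under that assumption:

    # Extract the host IP that host.minikube.internal resolves to inside the pod.
    HOST_IP=$(kubectl --context multinode-919215 exec busybox-7b57f96db7-bx8qj -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    # One ICMP echo to that address (192.168.58.1 in this run); exit 0 means reachable.
    kubectl --context multinode-919215 exec busybox-7b57f96db7-bx8qj -- sh -c "ping -c 1 $HOST_IP"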

TestMultiNode/serial/AddNode (58.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-919215 -v=5 --alsologtostderr
E1025 10:13:31.750122  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:13:47.006878  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-919215 -v=5 --alsologtostderr: (57.558606455s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.26s)

TestMultiNode/serial/MultiNodeLabels (0.16s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-919215 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.16s)

TestMultiNode/serial/ProfileList (0.91s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.91s)

TestMultiNode/serial/CopyFile (10.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp testdata/cp-test.txt multinode-919215:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2736369959/001/cp-test_multinode-919215.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215:/home/docker/cp-test.txt multinode-919215-m02:/home/docker/cp-test_multinode-919215_multinode-919215-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test_multinode-919215_multinode-919215-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215:/home/docker/cp-test.txt multinode-919215-m03:/home/docker/cp-test_multinode-919215_multinode-919215-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test_multinode-919215_multinode-919215-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp testdata/cp-test.txt multinode-919215-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2736369959/001/cp-test_multinode-919215-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m02:/home/docker/cp-test.txt multinode-919215:/home/docker/cp-test_multinode-919215-m02_multinode-919215.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test_multinode-919215-m02_multinode-919215.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m02:/home/docker/cp-test.txt multinode-919215-m03:/home/docker/cp-test_multinode-919215-m02_multinode-919215-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test_multinode-919215-m02_multinode-919215-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp testdata/cp-test.txt multinode-919215-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2736369959/001/cp-test_multinode-919215-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m03:/home/docker/cp-test.txt multinode-919215:/home/docker/cp-test_multinode-919215-m03_multinode-919215.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215 "sudo cat /home/docker/cp-test_multinode-919215-m03_multinode-919215.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 cp multinode-919215-m03:/home/docker/cp-test.txt multinode-919215-m02:/home/docker/cp-test_multinode-919215-m03_multinode-919215-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 ssh -n multinode-919215-m02 "sudo cat /home/docker/cp-test_multinode-919215-m03_multinode-919215-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.53s)
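Every step above follows the same round-trip pattern: cp a file to a node, then cat it back over ssh. The full matrix is just that pattern over every source/destination pair; a condensed sketch for this three-node profile:

    NODES="multinode-919215 multinode-919215-m02 multinode-919215-m03"
    for src in $NODES; do
      # Seed the source node, then fan the file out to each peer and read it back.
      out/minikube-linux-arm64 -p multinode-919215 cp testdata/cp-test.txt "$src:/home/docker/cp-test.txt"
      for dst in $NODES; do
        [ "$src" = "$dst" ] && continue
        out/minikube-linux-arm64 -p multinode-919215 cp "$src:/home/docker/cp-test.txt" \
          "$dst:/home/docker/cp-test_${src}_${dst}.txt"
        out/minikube-linux-arm64 -p multinode-919215 ssh -n "$dst" \
          "sudo cat /home/docker/cp-test_${src}_${dst}.txt"
      done
    done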

TestMultiNode/serial/StopNode (2.43s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-919215 node stop m03: (1.33549592s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-919215 status: exit status 7 (538.824139ms)

-- stdout --
	multinode-919215
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-919215-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-919215-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr: exit status 7 (555.517653ms)

-- stdout --
	multinode-919215
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-919215-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-919215-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 10:14:13.450439  400048 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:14:13.450669  400048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:13.450702  400048 out.go:374] Setting ErrFile to fd 2...
	I1025 10:14:13.450722  400048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:14:13.451003  400048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:14:13.451253  400048 out.go:368] Setting JSON to false
	I1025 10:14:13.451316  400048 mustload.go:65] Loading cluster: multinode-919215
	I1025 10:14:13.451402  400048 notify.go:220] Checking for updates...
	I1025 10:14:13.451793  400048 config.go:182] Loaded profile config "multinode-919215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:14:13.451845  400048 status.go:174] checking status of multinode-919215 ...
	I1025 10:14:13.452699  400048 cli_runner.go:164] Run: docker container inspect multinode-919215 --format={{.State.Status}}
	I1025 10:14:13.473915  400048 status.go:371] multinode-919215 host status = "Running" (err=<nil>)
	I1025 10:14:13.473936  400048 host.go:66] Checking if "multinode-919215" exists ...
	I1025 10:14:13.474226  400048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-919215
	I1025 10:14:13.503358  400048 host.go:66] Checking if "multinode-919215" exists ...
	I1025 10:14:13.504474  400048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:13.504529  400048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-919215
	I1025 10:14:13.522962  400048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33277 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/multinode-919215/id_rsa Username:docker}
	I1025 10:14:13.629214  400048 ssh_runner.go:195] Run: systemctl --version
	I1025 10:14:13.635924  400048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:13.649172  400048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:14:13.709462  400048 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-25 10:14:13.699924648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:14:13.710086  400048 kubeconfig.go:125] found "multinode-919215" server: "https://192.168.58.2:8443"
	I1025 10:14:13.710130  400048 api_server.go:166] Checking apiserver status ...
	I1025 10:14:13.710176  400048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1025 10:14:13.721933  400048 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1247/cgroup
	I1025 10:14:13.729903  400048 api_server.go:182] apiserver freezer: "6:freezer:/docker/cc921de2f780ea32573febdd5a089b5a7f5366b256715b178a789705b4504073/crio/crio-14fae7fc28600630cc47682337f31dba254247a6137a68db18df508cce70635a"
	I1025 10:14:13.730032  400048 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cc921de2f780ea32573febdd5a089b5a7f5366b256715b178a789705b4504073/crio/crio-14fae7fc28600630cc47682337f31dba254247a6137a68db18df508cce70635a/freezer.state
	I1025 10:14:13.737504  400048 api_server.go:204] freezer state: "THAWED"
	I1025 10:14:13.737534  400048 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1025 10:14:13.747918  400048 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1025 10:14:13.747951  400048 status.go:463] multinode-919215 apiserver status = Running (err=<nil>)
	I1025 10:14:13.747964  400048 status.go:176] multinode-919215 status: &{Name:multinode-919215 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:14:13.747981  400048 status.go:174] checking status of multinode-919215-m02 ...
	I1025 10:14:13.748302  400048 cli_runner.go:164] Run: docker container inspect multinode-919215-m02 --format={{.State.Status}}
	I1025 10:14:13.771656  400048 status.go:371] multinode-919215-m02 host status = "Running" (err=<nil>)
	I1025 10:14:13.771683  400048 host.go:66] Checking if "multinode-919215-m02" exists ...
	I1025 10:14:13.772168  400048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-919215-m02
	I1025 10:14:13.795674  400048 host.go:66] Checking if "multinode-919215-m02" exists ...
	I1025 10:14:13.796012  400048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1025 10:14:13.796067  400048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-919215-m02
	I1025 10:14:13.817251  400048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/21794-292167/.minikube/machines/multinode-919215-m02/id_rsa Username:docker}
	I1025 10:14:13.924468  400048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1025 10:14:13.937325  400048 status.go:176] multinode-919215-m02 status: &{Name:multinode-919215-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:14:13.937368  400048 status.go:174] checking status of multinode-919215-m03 ...
	I1025 10:14:13.937761  400048 cli_runner.go:164] Run: docker container inspect multinode-919215-m03 --format={{.State.Status}}
	I1025 10:14:13.959444  400048 status.go:371] multinode-919215-m03 host status = "Stopped" (err=<nil>)
	I1025 10:14:13.959473  400048 status.go:384] host is not running, skipping remaining checks
	I1025 10:14:13.959480  400048 status.go:176] multinode-919215-m03 status: &{Name:multinode-919215-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
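The --alsologtostderr trace shows how status validates the control plane: locate the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e., not paused), then hit /healthz. The same probe can be reproduced by hand inside the node; the PID, cgroup path, and endpoint below come from this run and will differ elsewhere:

    # Inside the control-plane node (via minikube ssh):
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # The freezer entry in /proc/<pid>/cgroup yields the path to freezer.state.
    CG=$(sudo egrep '^[0-9]+:freezer:' "/proc/$PID/cgroup" | cut -d: -f3)
    sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"   # expect THAWED
    # Finally, the health endpoint should answer 200/ok.
    curl -k https://192.168.58.2:8443/healthz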

TestMultiNode/serial/StartAfterStop (8.44s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-919215 node start m03 -v=5 --alsologtostderr: (7.655969929s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.44s)

TestMultiNode/serial/RestartKeepsNodes (78.25s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-919215
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-919215
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-919215: (25.132757617s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-919215 --wait=true -v=5 --alsologtostderr
E1025 10:15:28.677888  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-919215 --wait=true -v=5 --alsologtostderr: (52.982964013s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-919215
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.25s)

TestMultiNode/serial/DeleteNode (5.74s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-919215 node delete m03: (5.062823429s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.74s)
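The go-template at the end is the readiness assertion: it prints the Ready condition's status, one line per node, so the two nodes left after deleting m03 should yield two "True" lines. The equivalent standalone check:

    # One "True" per Ready node; anything else fails the assertion.
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'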

TestMultiNode/serial/StopMultiNode (24.05s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-919215 stop: (23.856300963s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-919215 status: exit status 7 (106.130639ms)

-- stdout --
	multinode-919215
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-919215-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr: exit status 7 (92.042419ms)

-- stdout --
	multinode-919215
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-919215-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1025 10:16:10.414307  407802 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:16:10.414431  407802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:16:10.414441  407802 out.go:374] Setting ErrFile to fd 2...
	I1025 10:16:10.414446  407802 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:16:10.414684  407802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:16:10.414900  407802 out.go:368] Setting JSON to false
	I1025 10:16:10.414932  407802 mustload.go:65] Loading cluster: multinode-919215
	I1025 10:16:10.415025  407802 notify.go:220] Checking for updates...
	I1025 10:16:10.415363  407802 config.go:182] Loaded profile config "multinode-919215": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:16:10.415381  407802 status.go:174] checking status of multinode-919215 ...
	I1025 10:16:10.415895  407802 cli_runner.go:164] Run: docker container inspect multinode-919215 --format={{.State.Status}}
	I1025 10:16:10.434881  407802 status.go:371] multinode-919215 host status = "Stopped" (err=<nil>)
	I1025 10:16:10.434907  407802 status.go:384] host is not running, skipping remaining checks
	I1025 10:16:10.434914  407802 status.go:176] multinode-919215 status: &{Name:multinode-919215 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1025 10:16:10.434939  407802 status.go:174] checking status of multinode-919215-m02 ...
	I1025 10:16:10.435308  407802 cli_runner.go:164] Run: docker container inspect multinode-919215-m02 --format={{.State.Status}}
	I1025 10:16:10.456645  407802 status.go:371] multinode-919215-m02 host status = "Stopped" (err=<nil>)
	I1025 10:16:10.456670  407802 status.go:384] host is not running, skipping remaining checks
	I1025 10:16:10.456688  407802 status.go:176] multinode-919215-m02 status: &{Name:multinode-919215-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

TestMultiNode/serial/RestartMultiNode (53.17s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-919215 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-919215 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (52.455005201s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-919215 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.17s)

TestMultiNode/serial/ValidateNameConflict (36.9s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-919215
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-919215-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-919215-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.158316ms)

-- stdout --
	* [multinode-919215-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-919215-m02' is duplicated with machine name 'multinode-919215-m02' in profile 'multinode-919215'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-919215-m03 --driver=docker  --container-runtime=crio
E1025 10:17:23.943295  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-919215-m03 --driver=docker  --container-runtime=crio: (34.256163027s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-919215
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-919215: exit status 80 (396.900319ms)

-- stdout --
	* Adding node m03 to cluster multinode-919215 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-919215-m03 already exists in multinode-919215-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-919215-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-919215-m03: (2.095136477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.90s)
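The conflict arises because a multi-node profile already owns machine names carrying -m02/-m03 suffixes, so those names are off-limits as new profile names. One way to see what is taken before choosing a name (jq and the .valid[].Name schema are assumptions here, matching the JSON this build emits):

    # List existing profile names; machine names like -m02 live inside their profile's node list.
    out/minikube-linux-arm64 profile list --output json | jq -r '.valid[].Name'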

TestPreload (124.07s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-837636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-837636 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m3.841162282s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-837636 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-837636 image pull gcr.io/k8s-minikube/busybox: (2.184125854s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-837636
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-837636: (5.906572824s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-837636 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-837636 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (49.452064607s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-837636 image list
helpers_test.go:175: Cleaning up "test-preload-837636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-837636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-837636: (2.439180245s)
--- PASS: TestPreload (124.07s)
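The flow being exercised: build a cluster with preloads disabled, add one image by hand, stop, restart with preloads enabled, and confirm the image survived. Condensed to its commands (the grep at the end is an illustrative check, not the test's own assertion):

    out/minikube-linux-arm64 -p test-preload-837636 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-837636
    out/minikube-linux-arm64 start -p test-preload-837636 --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio
    # The pulled image must still be present after the preload-enabled restart.
    out/minikube-linux-arm64 -p test-preload-837636 image list | grep busybox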

TestInsufficientStorage (13.86s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-771846 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-771846 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.286579666s)

-- stdout --
	{"specversion":"1.0","id":"2e64676a-8339-4b7b-ba54-fb703dc1cb73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-771846] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"fedcc5da-98a7-430d-9379-7d814f13100b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21794"}}
	{"specversion":"1.0","id":"f865bf14-ae44-4164-83c9-2bf7b216930e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"310de250-a4b6-472b-ad5a-c86e7857d590","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig"}}
	{"specversion":"1.0","id":"4565024d-c581-4826-ad03-2f514edb199c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube"}}
	{"specversion":"1.0","id":"c15b05f6-7af3-4f07-b92d-0520d8a423c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a4ddeb03-57da-48e2-8880-9607618d268e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f11a60b-145b-48f0-8b3c-1a69eff05ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5631bf9e-79ea-4224-8994-58272e78fdea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b1dd9716-3d4f-4c48-99c2-6f854a5930cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a8752196-bfb2-4ed4-a5bb-010f6749bb17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"06d736dd-bff5-48c0-a5b5-9cd7b0d026b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-771846\" primary control-plane node in \"insufficient-storage-771846\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"26aa9664-38f2-49c5-97f6-bc1310e42c56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2fc856f-18f9-4c4c-9af3-81a8bb40b819","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"86c39e31-49e0-4cb5-88dd-d2fc84e0167d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-771846 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-771846 --output=json --layout=cluster: exit status 7 (298.557627ms)

-- stdout --
	{"Name":"insufficient-storage-771846","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-771846","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 10:20:40.459558  423957 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-771846" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-771846 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-771846 --output=json --layout=cluster: exit status 7 (313.544156ms)

-- stdout --
	{"Name":"insufficient-storage-771846","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-771846","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1025 10:20:40.774536  424024 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-771846" does not appear in /home/jenkins/minikube-integration/21794-292167/kubeconfig
	E1025 10:20:40.784505  424024 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/insufficient-storage-771846/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-771846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-771846
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-771846: (1.961170009s)
--- PASS: TestInsufficientStorage (13.86s)
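With --output=json every line is a CloudEvents envelope, so the RSRC_DOCKER_STORAGE failure above is machine-readable. A sketch that filters the error events out of the stream (jq assumed):

    # Keep only error events and print their name, exit code, and message.
    out/minikube-linux-arm64 start -p insufficient-storage-771846 --memory=3072 --output=json \
      --wait=true --driver=docker --container-runtime=crio |
      jq -r 'select(.type == "io.k8s.sigs.minikube.error")
             | "\(.data.name) (exit \(.data.exitcode)): \(.data.message)"'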

TestRunningBinaryUpgrade (52.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2262268521 start -p running-upgrade-567548 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2262268521 start -p running-upgrade-567548 --memory=3072 --vm-driver=docker  --container-runtime=crio: (30.461405334s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-567548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-567548 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.173339571s)
helpers_test.go:175: Cleaning up "running-upgrade-567548" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-567548
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-567548: (1.981250369s)
--- PASS: TestRunningBinaryUpgrade (52.32s)

TestKubernetesUpgrade (359.81s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1025 10:22:23.946823  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.118221362s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-845331
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-845331: (1.400996724s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-845331 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-845331 status --format={{.Host}}: exit status 7 (102.469877ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.523030133s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-845331 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (155.060746ms)

-- stdout --
	* [kubernetes-upgrade-845331] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-845331
	    minikube start -p kubernetes-upgrade-845331 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8453312 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-845331 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.778772155s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-845331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-845331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-845331: (2.600635559s)
--- PASS: TestKubernetesUpgrade (359.81s)
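In short, the supported path is old version, stop, then restart at the newer version; an in-place downgrade is refused with exit 106, and the remedy is the delete-and-recreate sequence printed in the suggestion block above. The exercised sequence, condensed:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 \
      --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-845331
    # Restarting at a newer version upgrades in place; the reverse direction exits 106.
    out/minikube-linux-arm64 start -p kubernetes-upgrade-845331 --memory=3072 \
      --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio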

TestMissingContainerUpgrade (120.28s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.229164481 start -p missing-upgrade-353666 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.229164481 start -p missing-upgrade-353666 --memory=3072 --driver=docker  --container-runtime=crio: (1m6.142888597s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-353666
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-353666
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-353666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-353666 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (50.344586064s)
helpers_test.go:175: Cleaning up "missing-upgrade-353666" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-353666
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-353666: (2.232496351s)
--- PASS: TestMissingContainerUpgrade (120.28s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (90.607873ms)

-- stdout --
	* [NoKubernetes-704940] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (47.83s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-704940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-704940 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.257153299s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-704940 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (47.83s)

TestNoKubernetes/serial/StartWithStopK8s (9.23s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (6.05033759s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-704940 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-704940 status -o json: exit status 2 (501.416165ms)

-- stdout --
	{"Name":"NoKubernetes-704940","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-704940
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-704940: (2.673856998s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.23s)
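After restarting an existing profile with --no-kubernetes, the machine stays up while kubelet and the apiserver stop, which is exactly what the JSON status above encodes. A sketch of asserting that shape (jq assumed):

    # Expect "Running/Stopped"; status itself exits 2 because Kubernetes is down.
    out/minikube-linux-arm64 -p NoKubernetes-704940 status -o json | jq -r '.Host + "/" + .Kubelet'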

TestNoKubernetes/serial/Start (8.14s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-704940 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.140612967s)
--- PASS: TestNoKubernetes/serial/Start (8.14s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.48s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-704940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-704940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (478.412687ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.48s)
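The exit status carries the verdict here: systemctl is-active exits 0 for an active unit and non-zero otherwise, and the observed status 3 is the conventional "not running" code, so the failed ssh command is the expected result. A minimal sketch, run inside the node:

	$ systemctl is-active kubelet; echo $?
	inactive
	3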

TestNoKubernetes/serial/ProfileList (3.17s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-arm64 profile list --output=json: (2.347572475s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.17s)

TestNoKubernetes/serial/Stop (1.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-704940
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-704940: (1.318990582s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (6.54s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-704940 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-704940 --driver=docker  --container-runtime=crio: (6.543810179s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-704940 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-704940 "sudo systemctl is-active --quiet service kubelet": exit status 1 (285.020345ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestStoppedBinaryUpgrade/Setup (0.75s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.75s)

TestStoppedBinaryUpgrade/Upgrade (55.73s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3595076423 start -p stopped-upgrade-853068 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3595076423 start -p stopped-upgrade-853068 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.413282979s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3595076423 -p stopped-upgrade-853068 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3595076423 -p stopped-upgrade-853068 stop: (1.239051801s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-853068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-853068 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.075478714s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.73s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-853068
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-853068: (1.166923883s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

TestPause/serial/Start (83.64s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-598105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1025 10:25:28.676951  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-598105 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m23.635860284s)
--- PASS: TestPause/serial/Start (83.64s)

TestPause/serial/SecondStartNoReconfiguration (27.63s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-598105 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-598105 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.611321764s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.63s)

TestNetworkPlugins/group/false (5.28s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-821614 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-821614 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (261.778978ms)

-- stdout --
	* [false-821614] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21794
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1025 10:27:18.930993  461462 out.go:360] Setting OutFile to fd 1 ...
	I1025 10:27:18.931199  461462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:27:18.931212  461462 out.go:374] Setting ErrFile to fd 2...
	I1025 10:27:18.931217  461462 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1025 10:27:18.931480  461462 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21794-292167/.minikube/bin
	I1025 10:27:18.931930  461462 out.go:368] Setting JSON to false
	I1025 10:27:18.932888  461462 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7789,"bootTime":1761380250,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1025 10:27:18.932949  461462 start.go:141] virtualization:  
	I1025 10:27:18.936477  461462 out.go:179] * [false-821614] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1025 10:27:18.940361  461462 notify.go:220] Checking for updates...
	I1025 10:27:18.941160  461462 out.go:179]   - MINIKUBE_LOCATION=21794
	I1025 10:27:18.944630  461462 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1025 10:27:18.947957  461462 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21794-292167/kubeconfig
	I1025 10:27:18.950828  461462 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21794-292167/.minikube
	I1025 10:27:18.953808  461462 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1025 10:27:18.956226  461462 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1025 10:27:18.959641  461462 config.go:182] Loaded profile config "kubernetes-upgrade-845331": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1025 10:27:18.959746  461462 driver.go:421] Setting default libvirt URI to qemu:///system
	I1025 10:27:18.995396  461462 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1025 10:27:18.995503  461462 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1025 10:27:19.098054  461462 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-25 10:27:19.084335273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1025 10:27:19.098169  461462 docker.go:318] overlay module found
	I1025 10:27:19.101409  461462 out.go:179] * Using the docker driver based on user configuration
	I1025 10:27:19.104510  461462 start.go:305] selected driver: docker
	I1025 10:27:19.104548  461462 start.go:925] validating driver "docker" against <nil>
	I1025 10:27:19.104569  461462 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1025 10:27:19.108299  461462 out.go:203] 
	W1025 10:27:19.111244  461462 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1025 10:27:19.114244  461462 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-821614 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-821614

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-821614

>>> host: /etc/nsswitch.conf:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/hosts:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/resolv.conf:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-821614

>>> host: crictl pods:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: crictl containers:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> k8s: describe netcat deployment:
error: context "false-821614" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-821614" does not exist

>>> k8s: netcat logs:
error: context "false-821614" does not exist

>>> k8s: describe coredns deployment:
error: context "false-821614" does not exist

>>> k8s: describe coredns pods:
error: context "false-821614" does not exist

>>> k8s: coredns logs:
error: context "false-821614" does not exist

>>> k8s: describe api server pod(s):
error: context "false-821614" does not exist

>>> k8s: api server logs:
error: context "false-821614" does not exist

>>> host: /etc/cni:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: ip a s:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: ip r s:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: iptables-save:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: iptables table nat:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> k8s: describe kube-proxy daemon set:
error: context "false-821614" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-821614" does not exist

>>> k8s: kube-proxy logs:
error: context "false-821614" does not exist

>>> host: kubelet daemon status:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: kubelet daemon config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> k8s: kubelet logs:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:27:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-845331
contexts:
- context:
    cluster: kubernetes-upgrade-845331
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:27:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-845331
  name: kubernetes-upgrade-845331
current-context: kubernetes-upgrade-845331
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-845331
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.crt
    client-key: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.key
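Only the kubernetes-upgrade-845331 context exists in the kubeconfig dumped above; the false-821614 cluster never started, which is why every kubectl probe in this debug log reports a missing context. A quick check (hypothetical invocation; assumes kubectl on PATH and the same KUBECONFIG):

	$ kubectl config get-contexts -o name
	kubernetes-upgrade-845331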
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-821614

>>> host: docker daemon status:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: docker daemon config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/docker/daemon.json:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: docker system info:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: cri-docker daemon status:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: cri-docker daemon config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: cri-dockerd version:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: containerd daemon status:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: containerd daemon config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/containerd/config.toml:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: containerd config dump:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: crio daemon status:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: crio daemon config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: /etc/crio:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

>>> host: crio config:
* Profile "false-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-821614"

----------------------- debugLogs end: false-821614 [took: 4.798330255s] --------------------------------
helpers_test.go:175: Cleaning up "false-821614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-821614
E1025 10:27:23.943390  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/false (5.28s)
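The MK_USAGE exit above is the expected outcome: CRI-O ships no built-in pod networking, so minikube rejects --cni=false with that runtime. A minimal sketch of the distinction, with a hypothetical profile name:

	$ minikube start -p demo --container-runtime=crio --cni=false    # rejected: the runtime requires a CNI
	$ minikube start -p demo --container-runtime=crio --cni=bridge   # accepted: an explicit CNI is supplied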

TestStartStop/group/old-k8s-version/serial/FirstStart (65.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m5.060958165s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (65.06s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-610853 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ddea46f9-0802-490e-98fa-48636d4ec6e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ddea46f9-0802-490e-98fa-48636d4ec6e5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004309513s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-610853 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-610853 --alsologtostderr -v=3
E1025 10:30:11.751520  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-610853 --alsologtostderr -v=3: (12.006798505s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.01s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853: exit status 7 (77.71625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-610853 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (51.89s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1025 10:30:27.008684  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:30:28.676938  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-610853 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.47040077s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610853 -n old-k8s-version-610853
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.89s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2j5g" [b64430c6-825f-484b-9d66-8eb521ff792f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003077944s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-r2j5g" [b64430c6-825f-484b-9d66-8eb521ff792f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003227722s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-610853 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610853 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.831708948s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.83s)

TestStartStop/group/embed-certs/serial/FirstStart (83.62s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:32:23.943217  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.619254056s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc] Pending
helpers_test.go:352: "busybox" [b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b7d57fd1-b5eb-4724-9e1e-54fb753ba7cc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004569358s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-204074 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-204074 --alsologtostderr -v=3: (12.0637473s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074: exit status 7 (79.105692ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-204074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-204074 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.2648327s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-204074 -n default-k8s-diff-port-204074
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.78s)

TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-419185 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8c21ab0b-2754-4861-96bc-2019ef1c2e7d] Pending
helpers_test.go:352: "busybox" [8c21ab0b-2754-4861-96bc-2019ef1c2e7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8c21ab0b-2754-4861-96bc-2019ef1c2e7d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003228065s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-419185 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.39s)

TestStartStop/group/embed-certs/serial/Stop (12.58s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-419185 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-419185 --alsologtostderr -v=3: (12.578339133s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.58s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185: exit status 7 (69.620411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-419185 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (50.75s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-419185 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.272374629s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-419185 -n embed-certs-419185
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.75s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cf6hc" [63248964-f275-4a0a-af79-0a05bd9965bb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003049713s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cf6hc" [63248964-f275-4a0a-af79-0a05bd9965bb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003546361s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-204074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-204074 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)
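
The image audit above uses minikube's JSON image listing. A hand-rolled version of the same check (the jq filter, and the assumption that each entry exposes a repoTags array, are mine rather than the harness's) would flag anything outside the expected registries:

	# Hypothetical sketch: surface images the test reports as "non-minikube".
	out/minikube-linux-arm64 -p default-k8s-diff-port-204074 image list --format=json \
	  | jq -r '.[].repoTags[]' \
	  | grep -vE '^(registry\.k8s\.io|gcr\.io/k8s-minikube)/'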

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (69.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m9.825339926s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (69.83s)
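
Because the start above passes --preload=false, no preloaded image tarball is used and CRI-O pulls each image individually, which is why this FirstStart takes roughly 70s. One hedged way to inspect what actually landed on the node (the ssh/crictl invocation is an assumption, not something the test runs):

	out/minikube-linux-arm64 ssh -p no-preload-768303 "sudo crictl images"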

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8v7z6" [0c078832-35bc-42be-83c1-88cc29206272] Running
E1025 10:34:57.764739  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:57.771302  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:57.782660  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:57.804014  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:57.845419  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:57.926865  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:58.088743  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:58.410027  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:34:59.051658  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:35:00.342384  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004307259s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8v7z6" [0c078832-35bc-42be-83c1-88cc29206272] Running
E1025 10:35:02.904326  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004842543s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-419185 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-419185 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (45.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:35:18.267875  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:35:28.677910  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:35:38.749800  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.533926639s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.53s)
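
The run above pins the pod network CIDR through --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16. A quick manual confirmation that the setting took effect (this jsonpath query is illustrative; the test itself does not run it):

	kubectl --context newest-cni-491554 get nodes \
	  -o jsonpath='{.items[0].spec.podCIDR}'   # expect a subnet of 10.42.0.0/16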

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-768303 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [d33e33c4-4af4-48a5-94f1-bc1b25bbdda6] Pending
helpers_test.go:352: "busybox" [d33e33c4-4af4-48a5-94f1-bc1b25bbdda6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [d33e33c4-4af4-48a5-94f1-bc1b25bbdda6] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00333216s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-768303 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.40s)
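
For reference, the deploy-and-probe sequence above can be approximated by hand; kubectl wait here is a stand-in (my assumption) for the harness's 8m polling loop on the integration-test=busybox label:

	kubectl --context no-preload-768303 create -f testdata/busybox.yaml
	kubectl --context no-preload-768303 wait --for=condition=Ready pod/busybox --timeout=8m
	# The test then checks the open-files soft limit inside the container.
	kubectl --context no-preload-768303 exec busybox -- /bin/sh -c "ulimit -n"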

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-768303 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-768303 --alsologtostderr -v=3: (12.167811367s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-491554 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-491554 --alsologtostderr -v=3: (1.356837698s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554: exit status 7 (68.004032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-491554 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
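
The status probe above exits 7 with "Stopped" on stdout, which the test explicitly tolerates ("may be ok") since the cluster was just stopped. A minimal sketch of the same probe with the exit code handled explicitly (the case branches are illustrative):

	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
	case $? in
	  0) echo "host running" ;;
	  7) echo "host stopped (expected here)" ;;
	  *) echo "unexpected status error" ;;
	esac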

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-491554 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (16.23547623s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-491554 -n newest-cni-491554
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303: exit status 7 (120.672083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-768303 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (62.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1025 10:36:19.711195  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-768303 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.237029698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-768303 -n no-preload-768303
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (62.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-491554 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.995158285s)
--- PASS: TestNetworkPlugins/group/auto/Start (86.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mk9wc" [0d4d79bb-285b-4203-977e-605847831432] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003657488s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mk9wc" [0d4d79bb-285b-4203-977e-605847831432] Running
E1025 10:37:23.942989  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003543904s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-768303 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-768303 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (84.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1025 10:37:41.633288  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m24.372540712s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (84.37s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-821614 "pgrep -a kubelet"
I1025 10:38:02.488149  294017 config.go:182] Loaded profile config "auto-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nm5ms" [7fe24552-204e-49db-bfa3-b5f99f1e1be8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1025 10:38:03.552830  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.559195  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.570531  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.591854  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.633198  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.714508  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:03.876418  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:04.198469  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:04.840821  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:06.122507  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:38:08.684849  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-nm5ms" [7fe24552-204e-49db-bfa3-b5f99f1e1be8] Running
E1025 10:38:13.806547  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.004507123s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
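
The three auto-network checks above (DNS, Localhost, HairPin) share one pattern: exec into the netcat deployment and probe a target. Collected into a single sketch with the commands taken verbatim from the log (only the CTX variable is mine):

	CTX=auto-821614
	kubectl --context "$CTX" exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: the pod reaches itself back through its own service name.
	kubectl --context "$CTX" exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"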

                                                
                                    
TestNetworkPlugins/group/calico/Start (61.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1025 10:38:44.529492  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m1.039593551s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-gq4cq" [4a966f8b-e611-498f-b1e4-6788736d3198] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004481269s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-821614 "pgrep -a kubelet"
I1025 10:39:09.479369  294017 config.go:182] Loaded profile config "kindnet-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-45t94" [fb9588d0-e03e-4fa5-a5b8-e59bbed8e670] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-45t94" [fb9588d0-e03e-4fa5-a5b8-e59bbed8e670] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003556819s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-zmzs6" [d1d35ee2-3096-4de9-8a43-3ae3160ba867] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-zmzs6" [d1d35ee2-3096-4de9-8a43-3ae3160ba867] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003346199s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-821614 "pgrep -a kubelet"
I1025 10:39:44.620272  294017 config.go:182] Loaded profile config "calico-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bg7nx" [b621afc7-99cd-4aa3-b488-833e6502679c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bg7nx" [b621afc7-99cd-4aa3-b488-833e6502679c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004023803s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (71.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.041741876s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.04s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (77.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1025 10:40:25.474697  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/old-k8s-version-610853/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:28.676986  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/addons-523976/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:47.413830  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/default-k8s-diff-port-204074/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.768374  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.774724  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.786096  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.807455  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.848822  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:49.930237  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:50.092308  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:50.414065  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:51.056066  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:52.337418  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1025 10:40:54.898803  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m17.860886623s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (77.86s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-821614 "pgrep -a kubelet"
I1025 10:40:57.029013  294017 config.go:182] Loaded profile config "custom-flannel-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ldnkv" [c6394bd7-4b8e-407f-b3a0-981cfa2122b5] Pending
E1025 10:41:00.020427  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/no-preload-768303/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-ldnkv" [c6394bd7-4b8e-407f-b3a0-981cfa2122b5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.003026402s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.566536961s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-821614 "pgrep -a kubelet"
I1025 10:41:42.150490  294017 config.go:182] Loaded profile config "enable-default-cni-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gmcl9" [fa6e75ed-e30f-4c2b-a9ca-ecac45dbd27e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gmcl9" [fa6e75ed-e30f-4c2b-a9ca-ecac45dbd27e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00388562s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (75.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1025 10:42:23.942894  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/functional-900552/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-821614 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m15.356128828s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-4smf7" [dde4b1a5-b312-4baf-9a35-8367cfec6062] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003987387s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
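
The controller gate above polls for app=flannel pods in the kube-flannel namespace. Roughly equivalent by hand (kubectl wait approximates the harness's 10m loop):

	kubectl --context flannel-821614 -n kube-flannel wait \
	  --for=condition=Ready pod -l app=flannel --timeout=10m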

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-821614 "pgrep -a kubelet"
I1025 10:42:41.508499  294017 config.go:182] Loaded profile config "flannel-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-821614 replace --force -f testdata/netcat-deployment.yaml
I1025 10:42:41.873703  294017 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jhjlv" [e9b71c2d-0c7f-4e2f-9def-3d294d548f96] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jhjlv" [e9b71c2d-0c7f-4e2f-9def-3d294d548f96] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003036347s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.39s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-821614 "pgrep -a kubelet"
I1025 10:43:35.379046  294017 config.go:182] Loaded profile config "bridge-821614": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-821614 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jsmk5" [070d9f91-d7d3-4df0-887c-516dc8b846b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jsmk5" [070d9f91-d7d3-4df0-887c-516dc8b846b4] Running
E1025 10:43:43.798012  294017 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/auto-821614/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004381784s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)
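Note: kubectl replace --force deletes and recreates every object in the manifest rather than patching in place, which is why the pod above is observed in Pending before Running on each re-run. To reproduce the wait the test performs:

# recreate the deployment, then watch the pod cycle Pending -> Running
kubectl --context bridge-821614 replace --force -f testdata/netcat-deployment.yaml
kubectl --context bridge-821614 get pods -l app=netcat -w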

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-821614 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-821614 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (30/326)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)
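Note: the cached-images checks are skipped whenever a preload tarball exists, since minikube then side-loads all images from the preload instead of populating the per-image cache. A hedged way to confirm a preload is present on this host (directory layout per current minikube conventions):

# preload tarballs live under the minikube cache; filenames encode the k8s version and runtime
ls /home/jenkins/minikube-integration/21794-292167/.minikube/cache/preloaded-tarball/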

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.67s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-545529 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-545529" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-545529
--- SKIP: TestDownloadOnlyKic (0.67s)

                                                
                                    
TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-533631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-533631
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
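Note: even skipped StartStop groups create a profile entry that has to be torn down, which is what the helpers above do. The equivalent manual cleanup:

# delete the placeholder profile, then confirm it no longer appears
out/minikube-linux-arm64 delete -p disable-driver-mounts-533631
out/minikube-linux-arm64 profile list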

                                                
                                    
TestNetworkPlugins/group/kubenet (5.4s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-821614 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-821614

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-821614

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/hosts:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/resolv.conf:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-821614

>>> host: crictl pods:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: crictl containers:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> k8s: describe netcat deployment:
error: context "kubenet-821614" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-821614" does not exist

>>> k8s: netcat logs:
error: context "kubenet-821614" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-821614" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-821614" does not exist

>>> k8s: coredns logs:
error: context "kubenet-821614" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-821614" does not exist

>>> k8s: api server logs:
error: context "kubenet-821614" does not exist

>>> host: /etc/cni:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: ip a s:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: ip r s:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: iptables-save:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: iptables table nat:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-821614" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-821614" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-821614" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: kubelet daemon config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> k8s: kubelet logs:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 25 Oct 2025 10:22:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-845331
contexts:
- context:
    cluster: kubernetes-upgrade-845331
    user: kubernetes-upgrade-845331
  name: kubernetes-upgrade-845331
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-845331
  user:
    client-certificate: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.crt
    client-key: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-821614

>>> host: docker daemon status:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: docker daemon config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: docker system info:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: cri-docker daemon status:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: cri-docker daemon config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: cri-dockerd version:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: containerd daemon status:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: containerd daemon config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: containerd config dump:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: crio daemon status:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: crio daemon config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: /etc/crio:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

>>> host: crio config:
* Profile "kubenet-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-821614"

----------------------- debugLogs end: kubenet-821614 [took: 5.208519841s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-821614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-821614
--- SKIP: TestNetworkPlugins/group/kubenet (5.40s)
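Note: all of the debugLogs failures above are expected. The kubenet group is skipped before minikube start ever runs, so no kubeconfig context or profile directory exists and every probe fails at context lookup (the kubernetes-upgrade-845331 entry in the kubectl config dump appears to be a leftover from an earlier test in this run). A quick hedged confirmation:

# no context was ever created for the skipped profile
kubectl config get-contexts kubenet-821614   # errors: context not found
out/minikube-linux-arm64 profile list        # kubenet-821614 is absent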

                                                
                                    
TestNetworkPlugins/group/cilium (5.97s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-821614 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-821614" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21794-292167/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 25 Oct 2025 10:27:21 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.76.2:8443
name: kubernetes-upgrade-845331
contexts:
- context:
cluster: kubernetes-upgrade-845331
extensions:
- extension:
last-update: Sat, 25 Oct 2025 10:27:21 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: context_info
namespace: default
user: kubernetes-upgrade-845331
name: kubernetes-upgrade-845331
current-context: kubernetes-upgrade-845331
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-845331
user:
client-certificate: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.crt
client-key: /home/jenkins/minikube-integration/21794-292167/.minikube/profiles/kubernetes-upgrade-845331/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-821614

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: containerd daemon status:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: containerd daemon config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: containerd config dump:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: crio daemon status:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: crio daemon config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: /etc/crio:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

>>> host: crio config:
* Profile "cilium-821614" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-821614"

----------------------- debugLogs end: cilium-821614 [took: 5.747487223s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-821614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-821614
--- SKIP: TestNetworkPlugins/group/cilium (5.97s)
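
Note: every host-level probe in the debugLogs dump above reports the same condition: the test was skipped, so no cilium-821614 profile existed when the collector ran, and minikube could not answer any of the docker/containerd/crio host queries. A minimal sketch of the commands involved, using only invocations that appear verbatim in this log (the profile name is specific to this run):

	out/minikube-linux-arm64 profile list             # the missing profile does not appear here
	out/minikube-linux-arm64 start -p cilium-821614   # would create the cluster the probes expect
	out/minikube-linux-arm64 delete -p cilium-821614  # the cleanup step run by helpers_test.go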
